Search (59 results, page 1 of 3)

  • theme_ss:"Automatisches Abstracting"
  1. Moens, M.-F.; Uyttendaele, C.: Automatic text structuring and categorization as a first step in summarizing legal cases (1997) 0.05
    0.048932396 = product of:
      0.17126338 = sum of:
        0.031131983 = weight(_text_:management in 2256) [ClassicSimilarity], result of:
          0.031131983 = score(doc=2256,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 2256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2256)
        0.1401314 = weight(_text_:case in 2256) [ClassicSimilarity], result of:
          0.1401314 = score(doc=2256,freq=14.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.771088 = fieldWeight in 2256, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=2256)
      0.2857143 = coord(2/7)
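The explain tree above is Lucene ClassicSimilarity output: each term clause contributes a fieldWeight (tf · idf · fieldNorm) multiplied by a queryWeight (idf · queryNorm), and the document score is the sum of the matching clauses scaled by the coordination factor. A minimal sketch of that arithmetic (function names here are illustrative, not Lucene's API):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    # ClassicSimilarity: tf = sqrt(freq)
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

# Values taken from the two clauses of result 1 (doc 2256)
management = term_score(2.0, 3.3706124, 0.041336425, 0.046875)  # ~ 0.031131983
case = term_score(14.0, 4.3964143, 0.041336425, 0.046875)       # ~ 0.1401314
total = (management + case) * (2 / 7)  # coord(2/7), ~ 0.048932396
```

The reconstructed `total` matches the 0.05 document score shown for this result (to display precision).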
    
    Abstract
    The SALOMON system automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts relevant text units from the case text to form a case summary. Such a case profile facilitates the rapid determination of the relevance of the case or may be employed in text search. In a first important abstracting step, SALOMON performs an initial categorization of legal criminal cases and structures the case text into separate legally relevant and irrelevant components. A text grammar represented as a semantic network is used to automatically determine the category of the case and its components. The system extracts general data from the case and identifies text portions relevant for further abstracting. Prior knowledge of the text structure and its indicative cues may support automatic abstracting. A text grammar is a promising form for representing the knowledge involved.
    Source
    Information processing and management. 33(1997) no.6, S.727-737
  2. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.04
    Abstract
    Four working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - four working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract.
    Source
    Information processing and management. 31(1995) no.5, S.631-674
  3. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.02
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. By condensing a textual document, automatic text summarisation can reduce the need to refer to the source document. It also offers a means to deliver device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, focusing exclusively on novel parts may result in a loss of context, which may have an impact on the correct interpretation of the summary with respect to the source document. In this study we compare two strategies to produce summaries that incorporate novelty in different ways: a constant length summary, which contains only novel sentences, and an incremental summary, containing additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides a sufficient basis to determine the relevance of a document, or if indeed we need to include additional sentences to provide context. Findings from the study seem to suggest that there is only a minimal difference in performance for the tasks we set our users and that the presence of contextual information is not so important. However, for the case of mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
    Source
    Information processing and management. 44(2008) no.2, S.663-686
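The abstract above does not specify how novelty is detected; as an illustrative sketch only, a "novel sentences only" summary can be approximated with a greedy word-overlap filter (the threshold and tokenisation here are assumptions, not the paper's method):

```python
def novel_sentences(sentences, overlap_threshold=0.5):
    # Greedy novelty filter (illustrative stand-in for the paper's method):
    # keep a sentence only if it shares less than overlap_threshold of its
    # words with sentences already kept.
    kept, seen = [], set()
    for s in sentences:
        words = set(s.lower().split())
        if not words:
            continue
        if len(words & seen) / len(words) < overlap_threshold:
            kept.append(s)
            seen |= words
    return kept

# A mostly-repeated sentence is filtered out; genuinely new content is kept.
summary = novel_sentences([
    "The cat sat on the mat.",
    "The cat sat on the mat again.",
    "A dog barked in the yard.",
])
```

An incremental summary, in the paper's terms, would append further context sentences to such a novelty-only core.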
  4. Moens, M.-F.; Uyttendaele, C.; Dumotier, J.: Abstracting of legal cases : the potential of clustering based on the selection of representative objects (1999) 0.02
    Abstract
    The SALOMON project automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts text units from the case text to form a case summary. Such a case summary facilitates the rapid determination of the relevance of the case or may be employed in text search. An important part of the research concerns the development of techniques for the automatic recognition of representative text paragraphs (or sentences) in texts of unrestricted domains. These techniques are employed to eliminate redundant material in the case texts and to identify informative text paragraphs which are relevant to include in the case summary. An evaluation on a test set of 700 criminal cases demonstrates that the algorithms have application potential for automatic indexing, abstracting, and text linkage.
  5. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.02
    Date
    26. 2.1997 10:22:43
    Source
    Microcomputers for information management. 13(1996) no.1, S.41-55
  6. Marcu, D.: Automatic abstracting and summarization (2009) 0.02
    Abstract
    After lying dormant for a few decades, the field of automated text summarization has experienced a tremendous resurgence of interest. Recently, many new algorithms and techniques have been proposed for identifying important information in single documents and document collections, and for mapping this information into grammatical, cohesive, and coherent abstracts. Since 1997, annual workshops, conferences, and large-scale comparative evaluations have provided a rich environment for exchanging ideas between researchers in Asia, Europe, and North America. This entry reviews the main developments in the field and provides a guiding map to those interested in understanding the strengths and weaknesses of an increasingly ubiquitous technology.
  7. Xu, D.; Cheng, G.; Qu, Y.: Preferences in Wikipedia abstracts : empirical findings and implications for automatic entity summarization (2014) 0.02
    Abstract
    The volume of entity-centric structured data grows rapidly on the Web. The description of an entity, composed of property-value pairs (a.k.a. features), has become very large in many applications. To avoid information overload, efforts have been made to automatically select a limited number of features to be shown to the user based on certain criteria, which is called automatic entity summarization. However, to the best of our knowledge, there is a lack of extensive studies on how humans rank and select features in practice, which can provide empirical support and inspire future research. In this article, we present a large-scale statistical analysis of the descriptions of entities provided by DBpedia and the abstracts of their corresponding Wikipedia articles, to empirically study, along several different dimensions, which kinds of features are preferable when humans summarize. Implications for automatic entity summarization are drawn from the findings.
    Source
    Information processing and management. 50(2014) no.2, S.284-296
  8. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
    Source
    Information processing and management. 43(2007) no.6, S.1606-1618
  9. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.01
    Abstract
    Document keyphrases provide a concise summary of a document's content, offering semantic metadata summarizing a document. They can be used in many applications related to knowledge management and text mining, such as automatic text summarization, development of search engines, document clustering, document classification, thesaurus construction, and browsing interfaces. Because only a small portion of documents have keyphrases assigned by authors, and it is time-consuming and costly to manually assign keyphrases to documents, it is necessary to develop an algorithm to automatically generate keyphrases for documents. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human-identified phrases to assign weights to the candidate keyphrases. The logic of our algorithm is: the more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. KIP's learning function can enrich the glossary database by automatically adding newly identified keyphrases to the database. KIP's personalization feature lets the user build a glossary database specifically suited to his/her area of interest. The evaluation results show that KIP performs better than the systems we compared it with and that the learning function is effective.
    Date
    22. 7.2006 17:25:48
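The scoring logic stated in the KIP abstract can be sketched as follows (the glossary contents and weights below are hypothetical toy data, not KIP's actual database):

```python
def keyphrase_score(phrase, keyword_weights):
    # KIP's stated logic, sketched: the more known keywords a candidate
    # contains, and the heavier their weights, the higher it scores.
    # keyword_weights stands in for a glossary learned from positive samples.
    return sum(keyword_weights.get(w, 0.0) for w in phrase.lower().split())

weights = {"text": 1.0, "summarization": 2.5, "automatic": 1.5}  # toy glossary
candidates = ["automatic text summarization", "search engines", "text mining"]
ranked = sorted(candidates, key=lambda p: keyphrase_score(p, weights), reverse=True)
```

KIP's learning function would then feed newly accepted keyphrases back into the glossary, growing `weights` over time.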
  10. Craven, T.C.: Presentation of repeated phrases in a computer-assisted abstracting tool kit (2001) 0.01
    Source
    Information processing and management. 37(2001) no.2, S.221-230
  11. Bateman, J.; Teich, E.: Selective information presentation in an integrated publication system : an application of genre-driven text generation (1995) 0.01
    Source
    Information processing and management. 31(1995) no.5, S.753-767
  12. Endres-Niggemeyer, B.: SimSum : an empirically founded simulation of summarizing (2000) 0.01
    Source
    Information processing and management. 36(2000) no.4, S.659-682
  13. Johnson, F.C.; Paice, C.D.; Black, W.J.; Neal, A.P.: ¬The application of linguistic processing to automatic abstract generation (1993) 0.01
    Source
    Journal of document and text management. 1(1993), S.215-241
  14. McKeown, K.; Robin, J.; Kukich, K.: Generating concise natural language summaries (1995) 0.01
    Source
    Information processing and management. 31(1995) no.5, S.703-733
  15. Craven, T.C.: ¬A computer-aided abstracting tool kit (1993) 0.01
    Abstract
    Describes the abstracting assistance features being prototyped in the TEXNET text network management system. Sentence weighting methods include: weighting negatively or positively on the stems in a selected passage; weighting on general lists of cue words; adjusting weights of selected segments; and weighting on the occurrence of frequent stems. The user may adjust a number of parameters: the minimum strength of extracts, the threshold for frequent words/stems, and the amount by which sentence weight is adjusted for each weighting type.
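The weighting scheme the abstract describes can be sketched like this (a simplified illustration under stated assumptions: TEXNET's actual stemming, cue lists, and parameter names are not given here, so the names below are hypothetical):

```python
from collections import Counter

def weight_sentences(sentences, cue_words, cue_bonus=2.0):
    # TEXNET-style sentence weighting, sketched: average corpus frequency of
    # a sentence's words, plus a bonus for each cue word it contains.
    freq = Counter(w for s in sentences for w in s.lower().split())
    scores = []
    for s in sentences:
        words = s.lower().split()
        base = sum(freq[w] for w in words) / max(len(words), 1)
        scores.append(base + cue_bonus * sum(w in cue_words for w in words))
    return scores

def extract(sentences, cue_words, min_strength):
    # Keep only sentences whose weight reaches the minimum extract strength.
    return [s for s, w in zip(sentences, weight_sentences(sentences, cue_words))
            if w >= min_strength]

selected = extract(
    ["We conclude the method works.", "The method works.", "Unrelated aside."],
    cue_words={"conclude"},
    min_strength=1.5,
)
```

Here `min_strength` plays the role of the user-adjustable "minimum strength of extracts" parameter mentioned above.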
  16. Brandow, R.; Mitze, K.; Rau, L.F.: Automatic condensation of electronic publications by sentence selection (1995) 0.01
    Source
    Information processing and management. 31(1995) no.5, S.675-685
  17. Sparck Jones, K.; Endres-Niggemeyer, B.: Introduction: automatic summarizing (1995) 0.01
    Source
    Information processing and management. 31(1995) no.5, S.625-630
  18. Ahmad, K.: Text summarisation : the role of lexical cohesion analysis (1995) 0.01
    Source
    New review of document and text management. 1995, no.1, S.321-335
  19. Automatic summarizing : introduction (1995) 0.01
    Source
    Information processing and management. 31(1995) no.5, S.625-630
  20. Over, P.; Dang, H.; Harman, D.: DUC in context (2007) 0.01
    Source
    Information processing and management. 43(2007) no.6, S.1506-1520

Types

  • a 58
  • s 1