Search (115 results, page 1 of 6)

  • theme_ss:"Automatisches Abstracting"
  1. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.07
    0.071772985 = product of:
      0.119621634 = sum of:
        0.06318085 = weight(_text_:g in 1949) [ClassicSimilarity], result of:
          0.06318085 = score(doc=1949,freq=2.0), product of:
            0.15225126 = queryWeight, product of:
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.040536046 = queryNorm
            0.4149775 = fieldWeight in 1949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.7559474 = idf(docFreq=2809, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
        0.048019946 = weight(_text_:u in 1949) [ClassicSimilarity], result of:
          0.048019946 = score(doc=1949,freq=2.0), product of:
            0.13273303 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.040536046 = queryNorm
            0.3617784 = fieldWeight in 1949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
        0.00842084 = weight(_text_:a in 1949) [ClassicSimilarity], result of:
          0.00842084 = score(doc=1949,freq=4.0), product of:
            0.046739966 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.040536046 = queryNorm
            0.18016359 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
      0.6 = coord(3/5)
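The indented tree above is Lucene's ClassicSimilarity `explain` output. As a sanity check, entry 1's displayed score can be recomputed from the factors the tree lists; a minimal sketch (constants copied from the tree; `tf = sqrt(freq)` and `coord = matching clauses / total query clauses`, as in ClassicSimilarity):

```python
from math import sqrt

QUERY_NORM = 0.040536046  # queryNorm, shared by every clause in the tree

def clause_score(freq: float, idf: float, field_norm: float) -> float:
    """ClassicSimilarity clause score = queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM
    field_weight = sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
    return query_weight * field_weight

# the three matching clauses for doc 1949 (entry 1)
score = sum([
    clause_score(freq=2.0, idf=3.7559474, field_norm=0.078125),  # _text_:g
    clause_score(freq=2.0, idf=3.2744443, field_norm=0.078125),  # _text_:u
    clause_score(freq=4.0, idf=1.153047,  field_norm=0.078125),  # _text_:a
]) * (3 / 5)  # coord(3/5): 3 of 5 query clauses matched

# score comes out close to the displayed 0.071772985
```

The tiny residual difference from the displayed value is only float32-vs-float64 rounding.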
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997, pp. 478-483.
    Type
    a
  2. Automatic summarizing : introduction (1995) 0.04
    
    Content
    Contains, among others, contributions by: J. BATEMAN and E. TEICH; R. BRANDOW, K. MITZE and L.F. RAU; B. ENDRES-NIGGEMEYER, E. MAIER and A. SIGEL; M.T. MAYBURY; K. McKEOWN, J. ROBIN and K. KUKICH; A. ROTHKEGEL
    Editor
    Sparck Jones, K. and B. Endres-Niggemeyer
  3. Endres-Niggemeyer, B.: Kognitive Modellierung des Abstracting (1991) 0.04
    
    Source
    Deutscher Dokumentartag 1990. 1. Deutsch-deutscher Dokumentartag, 25.-27.9.90, Fulda. Proceedings. Ed.: W. Neubauer and U. Schneider-Briehn
    Type
    a
  4. Hahn, U.: Automatisches Abstracting (2013) 0.03
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6th, completely revised edition. Ed. by R. Kuhlen, W. Semar and D. Strauch. Founded by Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
    Type
    a
  5. Endres-Niggemeyer, B.: An empirical process model of abstracting (1992) 0.03
    
    Source
    Mensch und Maschine: Informationelle Schnittstellen der Kommunikation. Proceedings of the 3rd International Symposium on Information Science (ISI'92), 5-7 Nov 1992, Saarbrücken. Ed.: H.H. Zimmermann, H.-D. Luckhardt and A. Schulz
    Type
    a
  6. Xianghao, G.; Yixin, Z.; Li, Y.: A new method of news text understanding and abstracting based on speech acts theory (1998) 0.02
    
    Abstract
    Presents a method for the automated analysis and comprehension of foreign affairs news produced by a Chinese news agency. Notes that the development of the method was preceded by a study of the structuring rules of the news. Describes how an abstract of the news story is produced automatically from the analysis. Stresses that the main aim of the work is to use speech act theory to analyse and classify sentences.
    Type
    a
  7. Marsh, E.: A production rule system for message summarisation (1984) 0.02
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997, pp. 534-537.
    Type
    a
  8. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.02
    
    Abstract
    Keywords can be considered as condensed versions of documents and short forms of their summaries. In this paper, the problem of automatic extraction of keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text, and it can be said that a lexical chain represents the semantic content of a portion of the text. Although lexical chains have been extensively used in text summarization, their usage for the keyword extraction problem has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained.
    Type
    a
  9. Johnson, F.C.; Paice, C.D.; Black, W.J.; Neal, A.P.: The application of linguistic processing to automatic abstract generation (1993) 0.02
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997, pp. 538-552.
    Type
    a
  10. Hahn, U.: Die Verdichtung textuellen Wissens zu Information : vom Wandel methodischer Paradigmen beim automatischen Abstracting (2004) 0.02
    
    Type
    a
  11. Salton, G.: Automatic text structuring and summarization (1997) 0.02
    
    Abstract
    Applies the ideas from the automatic link generation research to automatic text summarisation. Using techniques for inter-document link generation, generates intra-document links between passages of a document. Based on the intra-document linkage pattern of a text, characterises the structure of the text. Applies the knowledge of text structure to do automatic text summarisation by passage extraction. Evaluates a set of 50 summaries generated using these techniques by comparing them to paragraph extracts constructed by humans. The automatic summarisation methods perform well, especially in view of the fact that the summaries generated by two humans for the same article are surprisingly dissimilar.
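The intra-document linking idea described in this abstract can be pictured with a toy sketch (not Salton's actual system; the bag-of-words weighting, similarity threshold, and sample paragraphs are invented for illustration): link paragraphs whose cosine similarity exceeds a threshold, then extract the most-linked paragraphs as the summary.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def summarize(paragraphs, threshold=0.3, k=1):
    """Link paragraphs with cosine >= threshold; extract the k most-linked ones."""
    vecs = [Counter(p.lower().split()) for p in paragraphs]
    degree = [
        sum(1 for j in range(len(vecs)) if j != i and cosine(vecs[i], vecs[j]) >= threshold)
        for i in range(len(vecs))
    ]
    ranked = sorted(range(len(paragraphs)), key=lambda i: degree[i], reverse=True)
    return [paragraphs[i] for i in sorted(ranked[:k])]  # keep original text order

paras = [
    "solar power plants convert sunlight",
    "solar power and wind power supply clean energy",
    "wind turbines supply energy",
]
picked = summarize(paras, k=1)  # the middle paragraph links to both others
```

The bushiest node of the linkage pattern (the middle paragraph, which overlaps both neighbours) is what gets extracted.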
    Footnote
    Contribution to a special issue on methods and tools for the automatic construction of hypertext
    Type
    a
  12. Saggion, H.; Lapalme, G.: Selective analysis for the automatic generation of summaries (2000) 0.02
    
    Abstract
    Selective Analysis is a new method for text summarization of technical articles whose design is based on the study of a corpus of professional abstracts and technical documents. The method emphasizes the selection and elaboration of particular types of information, exploring the issue of dynamic summarization. A computer prototype was developed to demonstrate the viability of the approach, and the automatic abstracts were evaluated using human informants. The results obtained so far indicate that the summaries are acceptable in content and text quality.
    Type
    a
  13. Xu, D.; Cheng, G.; Qu, Y.: Preferences in Wikipedia abstracts : empirical findings and implications for automatic entity summarization (2014) 0.02
    
    Abstract
    The volume of entity-centric structured data grows rapidly on the Web. The description of an entity, composed of property-value pairs (a.k.a. features), has become very large in many applications. To avoid information overload, efforts have been made to automatically select a limited number of features to be shown to the user based on certain criteria, which is called automatic entity summarization. However, to the best of our knowledge, there is a lack of extensive studies on how humans rank and select features in practice, which can provide empirical support and inspire future research. In this article, we present a large-scale statistical analysis of the descriptions of entities provided by DBpedia and the abstracts of their corresponding Wikipedia articles, to empirically study, along several different dimensions, which kinds of features are preferable when humans summarize. Implications for automatic entity summarization are drawn from the findings.
    Type
    a
  14. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.01
    
    Abstract
    With the onset of the information explosion arising from digital libraries and access to a wealth of information through the Internet, the need to efficiently determine the relevance of a document becomes even more urgent. Describes a text extraction system (TES), which retrieves a set of sentences from a document to form an indicative abstract. Such an automated process enables information to be filtered more quickly. Discusses the combination of various text extraction techniques. Compares results with manually produced abstracts
    Date
    26. 2.1997 10:22:43
    Type
    a
  15. Kannan, R.; Ghinea, G.; Swaminathan, S.: What do you wish to see? : A summarization system for movies based on user preferences (2015) 0.01
    
    Abstract
    Video summarization aims at producing a compact version of a full-length video while preserving the significant content of the original video. Movie summarization condenses a full-length movie into a summary that still retains the most significant and interesting content of the original movie. In the past, several movie summarization systems have been proposed to generate a movie summary based on low-level video features such as color, motion, texture, etc. However, a generic summary, which is common to everyone and is produced based only on low-level video features will not satisfy every user. As users' preferences for the summary differ vastly for the same movie, there is a need for a personalized movie summarization system nowadays. To address this demand, this paper proposes a novel system to generate semantically meaningful video summaries for the same movie, which are tailored to the preferences and interests of a user. For a given movie, shots and scenes are automatically detected and their high-level features are semi-automatically annotated. Preferences over high-level movie features are explicitly collected from the user using a query interface. The user preferences are generated by means of a stored-query. Movie summaries are generated at shot level and scene level, where shots or scenes are selected for summary skim based on the similarity measured between shots and scenes, and the user's preferences. The proposed movie summarization system is evaluated subjectively using a sample of 20 subjects with eight movies in the English language. The quality of the generated summaries is assessed by informativeness, enjoyability, relevance, and acceptance metrics and Quality of Perception measures. Further, the usability of the proposed summarization system is subjectively evaluated by conducting a questionnaire survey. The experimental results on the performance of the proposed movie summarization approach show the potential of the proposed system.
    Type
    a
  16. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.01
    
    Abstract
    Presents a system for summarizing quantitative data in natural language, focusing on the use of a corpus of basketball game summaries, drawn from online news services, to empirically shape the system design and to evaluate the approach. Initial corpus analysis revealed characteristics of textual summaries that challenge the capabilities of current language generation systems. A revision-based corpus analysis was used to identify and encode the revision rules of the system. Presents a quantitative evaluation, using several test corpora, to measure the robustness of the new revision-based model.
    Date
    6. 3.1997 16:22:15
    Type
    a
  17. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.01
    
    Abstract
    Describes the application of weighting strategies to model uncertainties and probabilities in automatic abstracting systems, particularly in the concept selection phase. The weights were originally assigned in an ad hoc manner and were then refined by manual analysis of the results. The new method attempts to derive the weights more systematically, using a genetic algorithm.
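The refinement step described here can be illustrated with a toy genetic algorithm (a hedged sketch, not the authors' system: the fitness function, population size, and mutation scheme are all invented for illustration). Candidate weight vectors are selected by fitness, recombined by crossover, and mutated until a good vector emerges.

```python
import random

def evolve(fitness, n_weights, pop_size=20, generations=60, seed=1):
    """Evolve a weight vector in [0, 1]^n that maximizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_weights)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_weights)          # mutate one gene, clamp to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# toy objective: recover a hand-picked "ideal" concept-weight vector
target = [0.7, 0.2, 0.1]
def fitness(w):
    return -sum(abs(wi - ti) for wi, ti in zip(w, target))

best = evolve(fitness, n_weights=3)
```

With a fixed seed the run is reproducible; after 60 generations the best vector sits close to the target, which is the sense in which a GA "refines" ad hoc weights.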
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
    Type
    a
  18. Endres-Niggemeyer, B.; Jauris-Heipke, S.; Pinsky, S.M.; Ulbricht, U.: Wissen gewinnen durch Wissen : Ontologiebasierte Informationsextraktion (2006) 0.01
    
    Type
    a
  19. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
    Type
    a
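The generic extractive core that SumBasic-style systems like the one above build on can be sketched as a word-probability selection loop: score each sentence by the mean probability of its words, pick the best, then square the probabilities of the words just used to curb redundancy. A minimal sketch, assuming whitespace tokenization; names are illustrative, not from the paper's system.

```python
# Hedged sketch of SumBasic-style frequency summarization.
from collections import Counter

def sumbasic(sentences, max_sentences=2):
    # Naive tokenization: lowercase, split on whitespace.
    tokenized = [s.lower().split() for s in sentences]
    words = [w for toks in tokenized for w in toks]
    total = len(words)
    prob = {w: c / total for w, c in Counter(words).items()}

    summary = []
    candidates = list(range(len(sentences)))
    while candidates and len(summary) < max_sentences:
        # Score each remaining sentence by the mean probability of its words.
        def score(i):
            toks = tokenized[i]
            return sum(prob[w] for w in toks) / len(toks)
        best = max(candidates, key=score)
        summary.append(sentences[best])
        candidates.remove(best)
        # Redundancy control: square the probability of words just used.
        for w in tokenized[best]:
            prob[w] = prob[w] ** 2
    return summary
```

The paper extends this baseline with topic focusing, sentence simplification, and lexical expansion; none of those components are represented here.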
  20. Advances in automatic text summarization (1999) 0.01
    
    Editor
    Mani, I. and M.T. Maybury

Languages

  • e 96
  • d 17
  • chi 2

Types

  • a 109
  • m 3
  • s 2
  • el 1
  • r 1
  • x 1