Search (11 results, page 1 of 1)

  • theme_ss:"Automatisches Abstracting"
  1. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.05
    0.049538486 = product of:
      0.09907697 = sum of:
        0.09907697 = sum of:
          0.0424972 = weight(_text_:online in 6751) [ClassicSimilarity], result of:
            0.0424972 = score(doc=6751,freq=2.0), product of:
              0.15842392 = queryWeight, product of:
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.05220068 = queryNorm
              0.2682499 = fieldWeight in 6751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.0625 = fieldNorm(doc=6751)
          0.05657977 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
            0.05657977 = score(doc=6751,freq=2.0), product of:
              0.18279788 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05220068 = queryNorm
              0.30952093 = fieldWeight in 6751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6751)
      0.5 = coord(1/2)
    
    Abstract
    Presents a system for summarizing quantitative data in natural language, focusing on the use of a corpus of basketball game summaries, drawn from online news services, to empirically shape the system design and to evaluate the approach. Initial corpus analysis revealed characteristics of textual summaries that challenge the capabilities of current language generation systems. A revision-based corpus analysis was used to identify and encode the revision rules of the system. Presents a quantitative evaluation, using several test corpora, to measure the robustness of the new revision-based model.
    Date
    6. 3.1997 16:22:15
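The score breakdowns attached to each hit follow Lucene's ClassicSimilarity (TF-IDF): each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm, and coord scales the sum by the fraction of query terms matched. A minimal sketch reproducing the first hit's score from the values above (the function name is mine, not Lucene API):

```python
from math import sqrt

def classic_similarity(terms, coord):
    """Recompute a Lucene ClassicSimilarity score from explain values.
    Per term: queryWeight * fieldWeight
            = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm);
    coord is the fraction of query terms that matched."""
    total = 0.0
    for freq, idf, query_norm, field_norm in terms:
        query_weight = idf * query_norm               # e.g. 0.15842392 for "online"
        field_weight = sqrt(freq) * idf * field_norm  # e.g. 0.2682499  for "online"
        total += query_weight * field_weight
    return coord * total

# Values copied from the explanation of result no. 1 (doc 6751):
terms = [
    (2.0, 3.0349014, 0.05220068, 0.0625),  # _text_:online
    (2.0, 3.5018296, 0.05220068, 0.0625),  # _text_:22
]
print(classic_similarity(terms, coord=0.5))  # -> ~0.049538486
```

The later explanations in this list differ only in freq, fieldNorm, and which of the two terms matched, so the same sketch reproduces every score on this page.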
  2. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.01
    0.014144942 = product of:
      0.028289884 = sum of:
        0.028289884 = product of:
          0.05657977 = sum of:
            0.05657977 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.05657977 = score(doc=6599,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26. 2.1997 10:22:43
  3. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.01
    0.014144942 = product of:
      0.028289884 = sum of:
        0.028289884 = product of:
          0.05657977 = sum of:
            0.05657977 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.05657977 = score(doc=6974,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    0.010608707 = product of:
      0.021217413 = sum of:
        0.021217413 = product of:
          0.042434826 = sum of:
            0.042434826 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.042434826 = score(doc=948,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
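The generic extractive component named in the title, SumBasic, scores sentences by the average unigram probability of their words and then discounts words it has already used. A minimal sketch of that baseline, assuming simple whitespace tokenization (my reconstruction, not the authors' system, which adds topic focusing, sentence simplification, and lexical expansion on top):

```python
from collections import Counter

def sumbasic(sentences, max_words=100):
    """Simplified SumBasic: pick sentences by average word probability,
    then square the probabilities of the chosen sentence's words so the
    next pick favours content not yet covered."""
    tokenized = [s.lower().split() for s in sentences]
    counts = Counter(w for toks in tokenized for w in toks)
    total = sum(counts.values())
    prob = {w: c / total for w, c in counts.items()}

    summary, length = [], 0
    remaining = set(range(len(sentences)))
    while remaining and length < max_words:
        best = max(remaining, key=lambda i:
                   sum(prob[w] for w in tokenized[i]) / max(1, len(tokenized[i])))
        summary.append(sentences[best])
        length += len(tokenized[best])
        remaining.discard(best)
        for w in tokenized[best]:   # discount words already in the summary
            prob[w] **= 2
    return " ".join(summary)
```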
  5. Rodríguez-Vidal, J.; Carrillo-de-Albornoz, J.; Gonzalo, J.; Plaza, L.: Authority and priority signals in automatic summary generation for online reputation management (2021) 0.01
    0.009390644 = product of:
      0.018781288 = sum of:
        0.018781288 = product of:
          0.037562575 = sum of:
            0.037562575 = weight(_text_:online in 213) [ClassicSimilarity], result of:
              0.037562575 = score(doc=213,freq=4.0), product of:
                0.15842392 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.05220068 = queryNorm
                0.23710167 = fieldWeight in 213, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=213)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online reputation management (ORM) comprises the collection of techniques that help monitor and improve the public image of an entity (companies, products, institutions) on the Internet. ORM experts try to minimize the negative impact of information about an entity while maximizing the positive material, so that the entity appears more trustworthy to customers. Because of the huge amount of information published on the Internet every day, the entire flow of information needs to be summarized to obtain only those data that are relevant to the entities. Traditionally, the automatic summarization task in the ORM scenario takes in-domain signals into account, such as popularity, polarity for reputation, and novelty, but a further feature remains to be considered: the authority of the people involved. This authority depends on the ability to convince others and therefore to influence opinions. In this work, we propose the use of authority signals that measure the influence of a user, jointly with (a) priority signals related to the ORM domain and (b) information regarding the different topics that influential people are talking about. Our results indicate that the use of authority signals may significantly improve the quality of the automatically generated summaries.
  6. Moens, M.F.; Dumortier, J.: Use of a text grammar for generating highlight abstracts of magazine articles (2000) 0.01
    0.009296263 = product of:
      0.018592525 = sum of:
        0.018592525 = product of:
          0.03718505 = sum of:
            0.03718505 = weight(_text_:online in 4540) [ClassicSimilarity], result of:
              0.03718505 = score(doc=4540,freq=2.0), product of:
                0.15842392 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.05220068 = queryNorm
                0.23471867 = fieldWeight in 4540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Browsing a database of article abstracts is one way to select and buy relevant magazine articles online. Our research contributes to the design and development of text grammars for abstracting texts in unlimited subject domains. We developed a system that parses texts based on the text grammar of a specific text type and that extracts sentences and statements which are relevant for inclusion in the abstracts. The system employs knowledge of the discourse patterns that are typical of news stories. The results are encouraging and demonstrate the importance of discourse structures in text summarisation.
  7. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.01
    0.008840589 = product of:
      0.017681178 = sum of:
        0.017681178 = product of:
          0.035362355 = sum of:
            0.035362355 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.035362355 = score(doc=5290,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 17:25:48
  8. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.01
    0.008840589 = product of:
      0.017681178 = sum of:
        0.017681178 = product of:
          0.035362355 = sum of:
            0.035362355 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.035362355 = score(doc=2640,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2016 12:29:41
  9. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.01
    0.008840589 = product of:
      0.017681178 = sum of:
        0.017681178 = product of:
          0.035362355 = sum of:
            0.035362355 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.035362355 = score(doc=889,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2023 18:57:12
  10. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.01
    0.008840589 = product of:
      0.017681178 = sum of:
        0.017681178 = product of:
          0.035362355 = sum of:
            0.035362355 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.035362355 = score(doc=1012,freq=2.0), product of:
                0.18279788 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05220068 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2023 14:55:20
  11. Meyer, R.: Alas, it would have been so nice : the Copernic Summarizer unfortunately cannot summarize Internet texts satisfactorily and sensibly (2002) 0.00
    0.0046481313 = product of:
      0.009296263 = sum of:
        0.009296263 = product of:
          0.018592525 = sum of:
            0.018592525 = weight(_text_:online in 648) [ClassicSimilarity], result of:
              0.018592525 = score(doc=648,freq=2.0), product of:
                0.15842392 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.05220068 = queryNorm
                0.11735933 = fieldWeight in 648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=648)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Web has made the hunt for textual content considerably easier. It is so easy to find some article on a given topic that one complains about abundance rather than scarcity. Search engines and catalogues help with the sifting by preselecting links. The program "Copernic Summarizer" takes a different approach: it produces excerpts of arbitrary texts and thereby aims to shorten reading time. Let us draw a veil of silence over the annoying compulsory registration (with mandatory entry of an e-mail address). What follows goes quickly, and not only the first steps are completed fast. The software can be used in various environments: Microsoft Office, some mail programs, and the Acrobat Reader for PDF files are supported. The method is, of course, particularly suited to web pages. The "Summarizer" installs itself in the browser as an icon, and with one click it condenses an online text in a separate window.

    This is not a summary in the proper sense, a restatement in its own words that briefly conveys the overall content. The result is plain shortening, and rather brutal shortening at that, since complete sentences are always deleted. The software scans the text, tries to determine keywords, and on that basis decides which sentences are important and which are not. This approach may have reduced the development effort, but it causes problems for the user. Sentences often refer to earlier statements, in phrasings such as "This method is . . ." or "One year later . . ."; in the summary, either the context is missing or one cannot trust that the referent really is found in the preceding sentence.

    The list of keywords displayed on the left is not always a happy one; it sometimes contains inconspicuous terms such as "Anlaß" ("occasion") or "zudem" ("moreover"). At least individual terms can be removed to refine the result, and the option of highlighting the key terms in the text is helpful. It remains incomprehensible, however, why one may not specify relevant words oneself to serve as the basis for the summary. The text can be shortened in several steps, from five to fifty percent. Five percent is unusable; twenty-five is a good compromise. The software is not exact about its own settings, though: for shorter texts the supposedly one-quarter summary is almost as long as the original, and even for two pages of densely printed text (8 kilobytes) the excerpt amounts to a third of the original.

    Web pages are usually decorated with a menu, advertising, info boxes, and much more. The software recognizes very reliably what is actually body text; everything else is filtered out. One occasionally regrets that the Summarizer does not list the complete text so that it could be read or printed black on white in a pleasant setting. As an alternative to triggering the summary manually, the "LiveSummarizer" can be activated: it condenses text the moment a page is loaded, but claims a third of the screen area in return, too high a price. Overall, we wonder how the program is supposed to be used sensibly. When condensing news, it is unclear whether the Summarizer suppresses important details; with long texts, questions about context cause confusion. If one is looking for the answer to a specific question, the browser's search function is often quicker. A summary would also have done the price good: the German publisher Softline charges 100 euros, which seems clearly excessive, especially since summarizing is the Summarizer's only purpose. Managing bookmarks and archiving texts would have been sensible additions.
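What the review describes, deriving keywords by frequency and keeping only whole sentences that contain them up to a target ratio, is the classic frequency-based extraction baseline. A hypothetical sketch of such an extractor (in no way Copernic's actual code) that also reproduces the two weaknesses criticized above, inconspicuous keywords and lost anaphoric context:

```python
import re
from collections import Counter

def extract_summary(text, ratio=0.25, num_keywords=10):
    """Hypothetical frequency-based extractor in the style the review
    describes: pick the most frequent words as 'keywords', then keep the
    whole sentences that score highest on them, up to ~ratio of the text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'\w+', text.lower())
    # Raw frequency surfaces inconspicuous words ("zudem", "Anlaß"),
    # exactly the weakness the review observes.
    keywords = {w for w, _ in Counter(words).most_common(num_keywords)}

    def score(sentence):
        toks = re.findall(r'\w+', sentence.lower())
        return sum(t in keywords for t in toks) / max(1, len(toks))

    budget = int(len(words) * ratio)
    chosen, used = [], 0
    for i in sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True):
        n = len(re.findall(r'\w+', sentences[i]))
        if used + n <= budget:
            chosen.append(i)
            used += n
    # Whole sentences, reassembled in original order; anaphora such as
    # "This method ..." lose their antecedents, the context problem above.
    return " ".join(sentences[i] for i in sorted(chosen))
```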