Search (50 results, page 1 of 3)

  • theme_ss:"Automatisches Abstracting"
  • year_i:[2000 TO 2010}
  1. Kuhlen, R.: Informationsaufbereitung III : Referieren (Abstracts - Abstracting - Grundlagen) (2004) 0.02
    
    Abstract
    What an abstract is (used here synonymously with Referat or Kurzreferat) has been laid down by the American National Standards Institute in a way that most experts can surely accept: "An abstract is defined as an abbreviated, accurate representation of the contents of a document"; the German standard DIN 1426 puts it almost the same way: "The abstract renders the content of the document briefly and clearly." Abstracts are part of everyday scientific life. Nearly all publications, at least in the natural sciences, engineering, information-related fields, and medicine, are preceded by abstracts, "preferably prepared by its author(s) for publication with it". There is probably no scientist who has never written an abstract. Does producing abstracts then belong to documentation or information-science methodology at all, if anyone can do it? What constitutes the informational added value that expert abstracts provide over lay abstracts? This is not easy to answer, not least because suitable evaluation procedures for measuring the quality of abstracts comparatively and "objectively" are lacking. Abstracts are produced to a considerable extent by information specialists, often on the assumption that authors themselves are less well suited to the task. Let us review what we know about abstracts and abstracting. A particularly well-made abstract is at times clearer than the source text itself, but it must not contain more information than the source: "Good abstracts are highly structured, concise, and coherent, and are the result of a thorough analysis of the content of the abstracted materials. Abstracts may be more readable than the basis documents, but because of size constraints they rarely equal and never surpass the information content of the basic document".
This is understandable, for an "abstract" is, to begin with, nothing other than the result of a process of abstraction. Without losing ourselves too deeply in the philosophical background of abstraction, it consists "in disregarding certain presentational or conceptual contents, from which one looks away ('abstracts') in favour of other partial contents. It is always bound up with fixing certain features of interest through active attention, features that are regarded, under a given pragmatic point of view, as 'essential' for an imagined object or for an object (or a plurality of objects) falling under a concept". Abstracts reduce not so much conceptual contents as texts, with respect to their propositional content. Borko/Bernier have even quantified this: they estimate the reduction factor at 1:10 to 1:12.
    Source
    Grundlagen der praktischen Information und Dokumentation. 5., völlig neu gefaßte Ausgabe. 2 Bde. Hrsg. von R. Kuhlen, Th. Seeger u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried. Bd.1: Handbuch zur Einführung in die Informationswissenschaft und -praxis
    Type
    a
  2. Kuhlen, R.: In Richtung Summarizing für Diskurse in K3 (2006) 0.02
    
    Abstract
    The need for summarizing services is demonstrated, in specialist information settings but also in communicative environments (discourses). Summarizing is placed in the context of existing (including automatic) abstracting/extracting. The current state of research is presented, above all with regard to multi-document summarizing. Summarizing is an important function in the increasingly complex and extensive discussions in electronic forums. This is shown using the e-learning system K3 as an example. Rudimentary summarizing functions of K3 and of the associated K3VIS system are presented. The framework for a more elaborate, template-oriented summarizing that exploits K3's rich markup features (roles, discourse types, content types, etc.) is laid out.
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
    Type
    a
  3. Meyer, R.: Allein, es wär' so schön gewesen : Der Copernic Summarizer kann Internettexte leider nicht befriedigend und sinnvoll zusammenfassen (2002) 0.02
    
    Abstract
    The Net has made the hunt for textual content considerably easier. It is so easy to find some article on a given topic that people complain of abundance rather than scarcity. Search engines and catalogues help with the sifting by pre-selecting links. The program "Copernic Summarizer" takes a different route: it produces excerpts of arbitrary texts and thereby aims to cut reading time. Let us draw a veil over the annoying mandatory registration (with compulsory disclosure of an e-mail address). What follows goes quickly, and not just the first steps. The software can be used in various environments: Microsoft Office, some mail programs, and the Acrobat Reader for PDF files are supported. The method is, however, particularly suited to web pages. The "Summarizer" installs itself in the browser as an icon, and with one click it condenses an online text in a separate window. Strictly speaking, this is not a summary in the program's own words that briefly conveys the overall content. The result is plain shortening, and rather brutal shortening at that, since only complete sentences are ever cut. The software scans the text, tries to identify keywords, and on that basis decides which sentences are important and which are not. The approach may have kept the development effort down, but it creates problems for the user. Sentences often refer back to earlier statements, in formulations such as "This method is..." or "A year later...". In the summary, either the context is missing, or one cannot trust that the antecedent really is in the preceding sentence. The list of keywords shown on the left does not always seem well chosen; it sometimes contains inconspicuous words such as "Anlaß" ("occasion") or "zudem" ("moreover").
At least individual terms can be removed to refine the result, and the option of highlighting the keywords in the text is helpful. It remains incomprehensible, though, why users may not specify relevant words themselves as the basis for the summary. The text can be shortened in several steps, from five to fifty percent. Five percent is unusable; twenty-five is a good compromise. The software is not exact about its own targets, however. For shorter texts, the supposedly one-quarter summary is almost as long as the original; even for two pages of densely printed text (8 kilobytes), the excerpt still amounts to a third of the original. Web pages are usually decked out with a menu, advertising, info boxes, and more. The software recognizes very reliably what is actually body text; everything else is filtered out. One sometimes regrets that the Summarizer does not list the complete text, so that it could be read or printed in black on white in a pleasant environment. As an alternative to triggering the summary manually, the "LiveSummarizer" can be activated. It condenses text at the moment a page is loaded, but claims a third of the screen for doing so - too high a price. All in all, we wonder how the program is to be used sensibly. When condensing news, it is uncertain whether the Summarizer suppresses important details. With long texts, questions about context cause confusion. And when looking for the answer to a specific question, the browser's search function is often faster. A summary would also have done the price good: the German publisher Softline charges 100 euros, which seems clearly excessive, especially since summarizing is the Summarizer's only purpose. Bookmark management and text archiving would have been sensible additions.
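The extraction procedure described in the review - identify keywords, score whole sentences by them, keep a fixed share of the text - can be sketched as follows. This is a minimal sketch for illustration only: the stopword list, tokenization, and scoring details are assumptions, not Copernic's actual algorithm.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "and", "or", "of", "to", "in", "it", "this", "that"}

def extractive_summary(text: str, ratio: float = 0.25) -> str:
    """Shorten a text by keeping only whole sentences: score each sentence
    by the corpus frequency of the keywords it contains, then keep the top
    `ratio` share of sentences in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    # The character class includes German letters, since the reviewed texts are German.
    words = [w for w in re.findall(r"[a-zäöüß]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sent: str) -> int:
        tokens = re.findall(r"[a-zäöüß]+", sent.lower())
        return sum(freq[t] for t in tokens if t not in STOPWORDS)

    keep = max(1, round(len(sentences) * ratio))
    # Rank by score, then restore document order - whole sentences only,
    # exactly the "brutal cutting" behaviour the review describes.
    top = sorted(sorted(sentences, key=score, reverse=True)[:keep], key=sentences.index)
    return " ".join(top)
```

At `ratio=0.25` the sketch keeps a quarter of the sentences, matching the compromise setting the review recommends.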
    Type
    a
  4. Shen, D.; Yang, Q.; Chen, Z.: Noise reduction through summarization for Web-page classification (2007) 0.02
    
    Abstract
    Due to a large variety of noisy information embedded in Web pages, Web-page classification is much more difficult than pure-text classification. In this paper, we propose to improve the Web-page classification performance by removing the noise through summarization techniques. We first give empirical evidence that ideal Web-page summaries generated by human editors can indeed improve the performance of Web-page classification algorithms. We then put forward a new Web-page summarization algorithm based on Web-page layout and evaluate it along with several other state-of-the-art text summarization algorithms on the LookSmart Web directory. Experimental results show that the classification algorithms (NB or SVM) augmented by any summarization approach can achieve an improvement by more than 5.0% as compared to pure-text-based classification algorithms. We further introduce an ensemble method to combine the different summarization algorithms. The ensemble summarization method achieves more than 12.0% improvement over pure-text based methods.
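The ensemble step in this abstract (combining the outputs of several summarization algorithms) can be illustrated with a simple Borda-count vote over per-algorithm sentence rankings. The abstract does not specify the combination rule at this level of detail, so the voting scheme below is an assumption for illustration.

```python
from collections import defaultdict

def ensemble_summary(sentences, rankings, k=2):
    """Combine several summarizers by voting: each algorithm contributes a
    ranking of sentence indices (best first), each position earns Borda
    points (len(ranking) - position), and the k top-voted sentences are
    returned in document order. Ties break toward the earlier sentence."""
    points = defaultdict(int)
    for ranking in rankings:
        for pos, idx in enumerate(ranking):
            points[idx] += len(ranking) - pos
    top = sorted(points, key=lambda i: (-points[i], i))[:k]
    return [sentences[i] for i in sorted(top)]
```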
    Type
    a
  5. Endres-Niggemeyer, B.; Jauris-Heipke, S.; Pinsky, S.M.; Ulbricht, U.: Wissen gewinnen durch Wissen : Ontologiebasierte Informationsextraktion (2006) 0.01
    
    Abstract
    Die ontologiebasierte Informationsextraktion, über die hier berichtet wird, ist Teil eines Systems zum automatischen Zusammenfassen, das sich am Vorgehen kompetenter Menschen orientiert. Dahinter steht die Annahme, dass Menschen die Ergebnisse eines Systems leichter übernehmen können, wenn sie mit Verfahren erarbeitet worden sind, die sie selbst auch benutzen. Das erste Anwendungsgebiet ist Knochenmarktransplantation (KMT). Im Kern des Systems Summit-BMT (Summarize It in Bone Marrow Transplantation) steht eine Ontologie des Fachgebietes. Sie ist als MySQL-Datenbank realisiert und versorgt menschliche Benutzer und Systemkomponenten mit Wissen. Summit-BMT unterstützt die Frageformulierung mit einem empirisch fundierten Szenario-Interface. Die Retrievalergebnisse werden durch ein Textpassagenretrieval vorselektiert und dann kognitiv fundierten Agenten unterbreitet, die unter Einsatz ihrer Wissensbasis / Ontologie genauer prüfen, ob die Propositionen aus der Benutzerfrage getroffen werden. Die relevanten Textclips aus dem Duelldokument werden in das Szenarioformular eingetragen und mit einem Link zu ihrem Vorkommen im Original präsentiert. In diesem Artikel stehen die Ontologie und ihr Gebrauch zur wissensbasierten Informationsextraktion im Mittelpunkt. Die Ontologiedatenbank hält unterschiedliche Wissenstypen so bereit, dass sie leicht kombiniert werden können: Konzepte, Propositionen und ihre syntaktisch-semantischen Schemata, Unifikatoren, Paraphrasen und Definitionen von Frage-Szenarios. Auf sie stützen sich die Systemagenten, welche von Menschen adaptierte Zusammenfassungsstrategien ausführen. Mängel in anderen Verarbeitungsschritten führen zu Verlusten, aber die eigentliche Qualität der Ergebnisse steht und fällt mit der Qualität der Ontologie. Erste Tests der Extraktionsleistung fallen verblüffend positiv aus.
    Type
    a
  6. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
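For context, the SumBasic baseline that the title alludes to can be sketched in a few lines: sentences are picked greedily by the average corpus probability of their words, and the probability of each used word is squared after a pick to penalize redundancy. This is a sketch of the baseline only; the paper's system adds topic focusing, sentence simplification, and lexical expansion on top of it.

```python
def sumbasic(sentences, k=2):
    """SumBasic-style extractive summarization: estimate unigram
    probabilities from the input, repeatedly pick the sentence with the
    highest average word probability, then square the probabilities of
    the words just used so later picks favour new content."""
    tokens = [s.lower().split() for s in sentences]
    total = sum(len(t) for t in tokens)
    p = {}
    for t in tokens:
        for w in t:
            p[w] = p.get(w, 0) + 1 / total
    chosen = []
    while len(chosen) < min(k, len(sentences)):
        best = max((i for i in range(len(sentences)) if i not in chosen),
                   key=lambda i: sum(p[w] for w in tokens[i]) / len(tokens[i]))
        chosen.append(best)
        for w in tokens[best]:
            p[w] = p[w] ** 2  # discourage re-selecting the same words
    return [sentences[i] for i in sorted(chosen)]
```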
    Type
    a
  7. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.01
    
    Abstract
    Document keyphrases provide a concise summary of a document's content, offering semantic metadata summarizing a document. They can be used in many applications related to knowledge management and text mining, such as automatic text summarization, development of search engines, document clustering, document classification, thesaurus construction, and browsing interfaces. Because only a small portion of documents have keyphrases assigned by authors, and it is time-consuming and costly to manually assign keyphrases to documents, it is necessary to develop an algorithm to automatically generate keyphrases for documents. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human identified phrases to assign weights to the candidate keyphrases. The logic of our algorithm is: The more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. KIP's learning function can enrich the glossary database by automatically adding new identified keyphrases to the database. KIP's personalization feature will let the user build a glossary database specifically suitable for the area of his/her interest. The evaluation results show that KIP's performance is better than the systems we compared to and that the learning function is effective.
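The scoring logic stated in this abstract - the more keywords a candidate phrase contains, and the more significant those keywords, the more likely it is a keyphrase - can be sketched directly. The additive weighting below is an assumption for illustration, not KIP's actual formula, and `keyword_weights` stands in for KIP's glossary database of human-identified phrases.

```python
def keyphrase_score(phrase, keyword_weights):
    """Score a candidate keyphrase by summing the prior weights of the
    known keywords it contains, so phrases with more, and more
    significant, keywords score higher."""
    return sum(keyword_weights.get(w, 0) for w in phrase.lower().split())

def extract_keyphrases(candidates, keyword_weights, k=2):
    """Rank candidate phrases by score and return the top k."""
    ranked = sorted(candidates,
                    key=lambda p: keyphrase_score(p, keyword_weights),
                    reverse=True)
    return ranked[:k]
```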
    Date
    22. 7.2006 17:25:48
    Type
    a
  8. Endres-Niggemeyer, B.; Ziegert, C.: SummIt-BMT : (Summarize It in BMT) in Diagnose und Therapie, Abschlussbericht (2002) 0.00
    
    Abstract
    SummIt-BMT (Summarize It in Bone Marrow Transplantation) - the target system of the project - is intended to give physicians in bone marrow transplantation fast information uptake from the WWW through cognitively grounded summarization (Endres-Niggemeyer, 1998). The BMBF-funded subproject reported on here focuses on the clinical questions. SummIt-BMT has a BMT ontology as its central component. Fig. 1 illustrates the system flow: users enter their information need into a structured scenario, drawing on concepts from the ontology. Queries to search engines are derived from the scenario. The SummIt-BMT metasearch engine triggers Google and searches Medline, the central bibliographic database of medicine. The search result is post-processed: links to full texts are followed and the full texts are fetched. The retrieved documents are examined with keyword retrieval for passages in which search concepts from the query/ontology cluster. These passages are proposed for summarization. The statements in them are parsed and examined by the system agents. If statements can be attached to the query by a semantic relation, i.e. contribute to answering it, they are included in the summary, unless other agents raise objections, e.g. redundancy. The result of the summarization is integrated into the query/answer scenario. Excerpts from the source documents are presented; a link gives immediate access to the source. SummIt-BMT is ready for the next round of information search and summarization whenever the user wishes.
  9. Hobson, S.P.; Dorr, B.J.; Monz, C.; Schwartz, R.: Task-based evaluation of text summarization using Relevance Prediction (2007) 0.00
    
    Abstract
    This article introduces a new task-based evaluation measure called Relevance Prediction that is a more intuitive measure of an individual's performance on a real-world task than interannotator agreement. Relevance Prediction parallels what a user does in the real world task of browsing a set of documents using standard search tools, i.e., the user judges relevance based on a short summary and then that same user - not an independent user - decides whether to open (and judge) the corresponding document. This measure is shown to be a more reliable measure of task performance than LDC Agreement, a current gold-standard based measure used in the summarization evaluation community. Our goal is to provide a stable framework within which developers of new automatic measures may make stronger statistical statements about the effectiveness of their measures in predicting summary usefulness. We demonstrate - as a proof-of-concept methodology for automatic metric developers - that a current automatic evaluation measure has a better correlation with Relevance Prediction than with LDC Agreement and that the significance level for detected differences is higher for the former than for the latter.
    Type
    a
  10. Díaz, A.; Gervás, P.: User-model based personalized summarization (2007) 0.00
    
    Abstract
    The potential of summary personalization is high, because a summary that would be useless to decide the relevance of a document if summarized in a generic manner, may be useful if the right sentences are selected that match the user interest. In this paper we defend the use of a personalized summarization facility to maximize the density of relevance of selections sent by a personalized information system to a given user. The personalization is applied to the digital newspaper domain and it used a user-model that stores long and short term interests using four reference systems: sections, categories, keywords and feedback terms. On the other side, it is crucial to measure how much information is lost during the summarization process, and how this information loss may affect the ability of the user to judge the relevance of a given document. The results obtained in two personalization systems show that personalized summaries perform better than generic and generic-personalized summaries in terms of identifying documents that satisfy user preferences. We also considered a user-centred direct evaluation that showed a high level of user satisfaction with the summaries.
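The core idea of personalized summarization - select the sentences that match the user model, so the summary maximizes the density of relevance for that user - can be sketched as follows. The plain term-overlap score is a simplifying assumption; the paper's user model combines four reference systems (sections, categories, keywords, and feedback terms) with long- and short-term interests.

```python
def personalized_summary(sentences, user_terms, k=2):
    """Select the k sentences best matching a user model: each sentence is
    scored by how many user-interest terms it contains, and the top-scoring
    sentences are returned in document order."""
    def score(s):
        return len(set(s.lower().split()) & user_terms)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]
```

Two users with different `user_terms` thus get different summaries of the same document, which is what makes a generically useless summary useful for judging relevance.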
    Type
    a
  11. Craven, T.C.: Presentation of repeated phrases in a computer-assisted abstracting tool kit (2001) 0.00
    
    Type
    a
  12. Yusuff, A.: Automatisches Indexing and Abstracting : Grundlagen und Beispiele (2002) 0.00
    
    Imprint
    Potsdam : Fachhochschule, FB A-B-D
  13. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.00
    0.0011633779 = product of:
      0.0046535116 = sum of:
        0.0046535116 = product of:
          0.013960535 = sum of:
            0.013960535 = weight(_text_:a in 951) [ClassicSimilarity], result of:
              0.013960535 = score(doc=951,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.25222903 = fieldWeight in 951, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=951)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Keywords can be considered condensed versions of documents and short forms of their summaries. In this paper, the problem of automatically extracting keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text, and can be said to represent the semantic content of a portion of that text. Although lexical chains have been used extensively in text summarization, their use for the keyword extraction problem has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained.
    Type
    a
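To make the lexical-chain idea above concrete: the weakest chain relation is simple repetition of the same word, and even that trivial case already yields a usable keyword signal. The sketch below uses only repetition chains; Ercan and Cicekli's actual method builds chains over WordNet semantic relations and learns weights in a supervised setting, none of which is shown here.

```python
from collections import Counter

def keyword_candidates(text, k=3,
                       stopwords=frozenset({"the", "a", "of", "and", "in", "is", "to"})):
    """Score each word by the length of its (trivial) lexical chain --
    here just repetition of the identical token, the weakest chain relation."""
    tokens = [w.strip(".,;").lower() for w in text.split()]
    chains = Counter(t for t in tokens if t not in stopwords and t.isalpha())
    return [w for w, _ in chains.most_common(k)]
```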
  14. Ling, X.; Jiang, J.; He, X.; Mei, Q.; Zhai, C.; Schatz, B.: Generating gene summaries from biomedical literature : a study of semi-structured summarization (2007) 0.00
    0.0010992887 = product of:
      0.004397155 = sum of:
        0.004397155 = product of:
          0.013191464 = sum of:
            0.013191464 = weight(_text_:a in 946) [ClassicSimilarity], result of:
              0.013191464 = score(doc=946,freq=28.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23833402 = fieldWeight in 946, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=946)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Most knowledge accumulated through scientific discoveries in genomics and related biomedical disciplines is buried in the vast amount of biomedical literature. Since understanding gene regulation is fundamental to biomedical research, summarizing all the existing knowledge about a gene based on the literature is highly desirable to help biologists digest it. In this paper, we present a study of methods for automatically generating gene summaries from biomedical literature. Unlike most existing work on automatic text summarization, in which the generated summary is often a list of extracted sentences, we propose to generate a semi-structured summary which consists of sentences covering specific semantic aspects of a gene. Such a semi-structured summary is more appropriate for describing genes and poses special challenges for automatic text summarization. We propose a two-stage approach to generate such a summary for a given gene - first retrieving articles about a gene and then extracting sentences for each specified semantic aspect. We address the issue of gene name variation in the first stage and propose several different methods for sentence extraction in the second stage. We evaluate the proposed methods using a test set with 20 genes. Experiment results show that the proposed methods can generate useful semi-structured gene summaries automatically from biomedical literature, and our proposed methods outperform general-purpose summarization methods. Among all the proposed methods for sentence extraction, a probabilistic language modeling approach that models gene context performs the best.
    Type
    a
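The two-stage pipeline described above (retrieve articles by gene name, then extract one sentence per semantic aspect) can be caricatured as follows. Everything here is invented for illustration: the name-variant table, the aspect cue words, and the cue-matching extraction are hypothetical stand-ins for the paper's retrieval and language-modeling methods.

```python
# Toy name-variation table and aspect cue words -- illustrative only.
GENE_VARIANTS = {"tp53": {"tp53", "p53"}}
ASPECTS = {"function": {"regulates", "suppressor"}, "disease": {"cancer", "tumor"}}

def gene_summary(gene, sentences):
    """Two-stage sketch: keep sentences mentioning any name variant of the
    gene, then pick one sentence per semantic aspect by cue-word match."""
    variants = GENE_VARIANTS.get(gene, {gene})
    relevant = [s for s in sentences
                if variants & {w.strip(".,").lower() for w in s.split()}]
    summary = {}
    for aspect, cues in ASPECTS.items():
        for s in relevant:
            if cues & {w.strip(".,").lower() for w in s.split()}:
                summary[aspect] = s
                break
    return summary
```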
  15. Zajic, D.; Dorr, B.J.; Lin, J.; Schwartz, R.: Multi-candidate reduction : sentence compression as a tool for document summarization tasks (2007) 0.00
    0.0010882404 = product of:
      0.0043529617 = sum of:
        0.0043529617 = product of:
          0.013058884 = sum of:
            0.013058884 = weight(_text_:a in 944) [ClassicSimilarity], result of:
              0.013058884 = score(doc=944,freq=14.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23593865 = fieldWeight in 944, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=944)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article examines the application of two single-document sentence compression techniques to the problem of multi-document summarization: a "parse-and-trim" approach and a statistical noisy-channel approach. We introduce the multi-candidate reduction (MCR) framework for multi-document summarization, in which many compressed candidates are generated for each source sentence. These candidates are then selected for inclusion in the final summary based on a combination of static and dynamic features. Evaluations demonstrate that sentence compression is a valuable component of a larger multi-document summarization framework.
    Type
    a
  16. Ou, S.; Khoo, C.S.G.; Goh, D.H.: Multi-document summarization of news articles using an event-based framework (2006) 0.00
    0.0010593012 = product of:
      0.004237205 = sum of:
        0.004237205 = product of:
          0.012711613 = sum of:
            0.012711613 = weight(_text_:a in 657) [ClassicSimilarity], result of:
              0.012711613 = score(doc=657,freq=26.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22966442 = fieldWeight in 657, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=657)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this research is to develop a method for automatic construction of multi-document summaries of sets of news articles that might be retrieved by a web search engine in response to a user query.
    Design/methodology/approach - Based on the cross-document discourse analysis, an event-based framework is proposed for integrating and organizing information extracted from different news articles. It has a hierarchical structure in which the summarized information is presented at the top level and more detailed information given at the lower levels. A tree-view interface was implemented for displaying a multi-document summary based on the framework. A preliminary user evaluation was performed by comparing the framework-based summaries against the sentence-based summaries.
    Findings - In a small evaluation, all the human subjects preferred the framework-based summaries to the sentence-based summaries. It indicates that the event-based framework is an effective way to summarize a set of news articles reporting an event or a series of relevant events.
    Research limitations/implications - Limited to event-based news articles only, not applicable to news critiques and other kinds of news articles. A summarization system based on the event-based framework is being implemented.
    Practical implications - Multi-document summarization of news articles can adopt the proposed event-based framework.
    Originality/value - An event-based framework for summarizing sets of news articles was developed and evaluated using a tree-view interface for displaying such summaries.
    Type
    a
  17. Steinberger, J.; Poesio, M.; Kabadjov, M.A.; Jezek, K.: Two uses of anaphora resolution in summarization (2007) 0.00
    0.0010576702 = product of:
      0.004230681 = sum of:
        0.004230681 = product of:
          0.012692042 = sum of:
            0.012692042 = weight(_text_:a in 949) [ClassicSimilarity], result of:
              0.012692042 = score(doc=949,freq=18.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22931081 = fieldWeight in 949, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=949)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    We propose a new method for using anaphoric information in Latent Semantic Analysis (LSA), and discuss its application to develop an LSA-based summarizer which achieves a significantly better performance than a system not using anaphoric information, and a better performance by the ROUGE measure than all but one of the single-document summarizers participating in DUC-2002. Anaphoric information is automatically extracted using a new release of our own anaphora resolution system, GuiTAR, which incorporates proper noun resolution. Our summarizer also includes a new approach for automatically identifying the dimensionality reduction of a document on the basis of the desired summarization percentage. Anaphoric information is also used to check the coherence of the summary produced by our summarizer, by a reference checker module which identifies anaphora resolution errors caused by sentence extraction.
    Type
    a
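The LSA-extraction step underlying the summarizer above follows the familiar pattern of Gong and Liu: build a term-sentence matrix, take its SVD, and pick the sentence that dominates the top latent topic. A pure-Python sketch (power iteration stands in for a real SVD; the anaphora-resolution enrichment that is the paper's actual contribution is omitted):

```python
def top_sentence_lsa(sentences, iters=50):
    """One-topic LSA extraction: power-iterate A^T A to approximate the top
    right singular vector of the term-sentence matrix A, then return the
    sentence with the largest component in that vector."""
    vocab = sorted({w.lower() for s in sentences for w in s.split()})
    A = [[s.lower().split().count(w) for s in sentences] for w in vocab]
    n = len(sentences)
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(n)) for row in A]            # u = A v
        w = [sum(A[i][j] * u[i] for i in range(len(A))) for j in range(n)]  # w = A^T u
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return sentences[max(range(n), key=lambda j: abs(v[j]))]
```

The paper's point is that replacing raw term counts in A with anaphora-aware counts (so a pronoun contributes to its antecedent's term) measurably improves this selection.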
  18. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.00
    0.0010177437 = product of:
      0.004070975 = sum of:
        0.004070975 = product of:
          0.012212924 = sum of:
            0.012212924 = weight(_text_:a in 2054) [ClassicSimilarity], result of:
              0.012212924 = score(doc=2054,freq=24.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22065444 = fieldWeight in 2054, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2054)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. By condensing a textual document, automatic text summarisation can reduce the need to refer to the source document. It also offers a means to deliver device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, focusing exclusively on novel parts may result in a loss of context, which may affect the correct interpretation of the summary with respect to the source document. In this study we compare two strategies for producing summaries that incorporate novelty in different ways: a constant-length summary, which contains only novel sentences, and an incremental summary, containing additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides a sufficient basis for determining the relevance of a document, or whether we need to include additional sentences to provide context. Findings from the study suggest that there is only a minimal difference in performance for the tasks we set our users, and that the presence of contextual information is not so important. However, for the case of mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
    Type
    a
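The novelty filter the study builds on can be illustrated with a crude word-level detector: a sentence enters the summary only if it contributes enough previously unseen words. This sketch is an assumption-laden simplification (the function name and threshold are invented), not the authors' detector.

```python
def novel_sentences(sentences, threshold=2):
    """Keep a sentence only if it contributes at least `threshold`
    previously unseen words -- a crude word-level novelty detector."""
    seen, kept = set(), []
    for s in sentences:
        words = {w.strip(".,").lower() for w in s.split()}
        if len(words - seen) >= threshold:
            kept.append(s)
        seen |= words
    return kept
```

The incremental variant in the paper would additionally re-attach skipped neighbouring sentences to restore context; the constant-length variant is what the filter above produces.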
  19. Dorr, B.J.; Gaasterland, T.: Exploiting aspectual features and connecting words for summarization-inspired temporal-relation extraction (2007) 0.00
    9.97181E-4 = product of:
      0.003988724 = sum of:
        0.003988724 = product of:
          0.011966172 = sum of:
            0.011966172 = weight(_text_:a in 950) [ClassicSimilarity], result of:
              0.011966172 = score(doc=950,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2161963 = fieldWeight in 950, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=950)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents a model that incorporates contemporary theories of tense and aspect and develops a new framework for extracting temporal relations between two sentence-internal events, given their tense, aspect, and a temporal connecting word relating the two events. A linguistic constraint on event combination has been implemented to detect incorrect parser analyses and potentially apply syntactic reanalysis or semantic reinterpretation - in preparation for subsequent processing for multi-document summarization. An important contribution of this work is the extension of two different existing theoretical frameworks - Hornstein's 1990 theory of tense analysis and Allen's 1984 theory on event ordering - and the combination of both into a unified system for representing and constraining combinations of different event types (points, closed intervals, and open-ended intervals). We show that our theoretical results have been verified in a large-scale corpus analysis. The framework is designed to inform a temporally motivated sentence-ordering module in an implemented multi-document summarization system.
    Type
    a
  20. Sjöbergh, J.: Older versions of the ROUGEeval summarization evaluation system were easier to fool (2007) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 940) [ClassicSimilarity], result of:
              0.011281814 = score(doc=940,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 940, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=940)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    We show some limitations of the ROUGE evaluation method for automatic summarization. We present a method for automatic summarization based on a Markov model of the source text. Using a simple greedy word-selection strategy, summaries with high ROUGE scores are generated. However, these summaries would not be considered good by human readers. The method can be adapted to trick different settings of the ROUGEeval package.
    Type
    a
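The weakness probed above is easy to reproduce once ROUGE-1 recall is written out: it rewards unigram overlap with the reference, so a word salad assembled from the reference's frequent words can score perfectly. A minimal sketch of the metric's core (clipped unigram counts; the full ROUGEeval package adds n-grams, stemming, and other settings not shown here):

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Clipped unigram-overlap recall, the core of ROUGE-1."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / sum(ref.values())
```

A greedy generator that simply emits the reference's words in frequency order (or, as in the paper, words chosen from a Markov model of the source) achieves recall 1.0 while producing text no human would accept as a summary.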

Languages

  • e 40
  • d 10

Types

  • a 47
  • el 1
  • m 1
  • r 1
  • x 1