Search (44 results, page 1 of 3)

  • Filter: theme_ss:"Automatisches Abstracting"
  • Filter: year_i:[2000 TO 2010}
  1. Kuhlen, R.: Informationsaufbereitung III : Referieren (Abstracts - Abstracting - Grundlagen) (2004) 0.03
    Score breakdown (Lucene ClassicSimilarity): 0.028076127 = coord(2/4) * (0.011892734 + 0.044259522). Each term score is queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm; queryNorm = 0.050415643, fieldNorm = 0.03125. For "information": tf = sqrt(6) = 2.4494898, idf = 1.7554779 (docFreq=20772, maxDocs=44218). For "standards": tf = sqrt(2) = 1.4142135, idf = 4.4569545 (docFreq=1393, maxDocs=44218).
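    The breakdown above can be reproduced in a few lines. A minimal sketch in Python: the numbers (tf, idf, queryNorm, fieldNorm) are copied from the explanation itself, and the helper function is ours, not part of Lucene's API.

    import math

    def term_score(freq, idf, query_norm, field_norm):
        # One term's contribution in ClassicSimilarity: queryWeight * fieldWeight.
        query_weight = idf * query_norm                    # idf(t) * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(freq)
        return query_weight * field_weight

    QUERY_NORM = 0.050415643  # as reported in the breakdown above

    info = term_score(freq=6.0, idf=1.7554779, query_norm=QUERY_NORM, field_norm=0.03125)
    stds = term_score(freq=2.0, idf=4.4569545, query_norm=QUERY_NORM, field_norm=0.03125)

    coord = 2 / 4  # 2 of 4 query terms matched
    print(coord * (info + stds))  # ~0.028076127, the document score shown above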
    
    Abstract
    What an abstract is (used below synonymously with Referat or Kurzreferat) is laid down by the American National Standards Institute in a way that most experts can probably accept: "An abstract is defined as an abbreviated, accurate representation of the contents of a document"; the German standard DIN 1426 says almost the same: "Das Kurzreferat gibt kurz und klar den Inhalt des Dokuments wieder" [the abstract renders the content of the document briefly and clearly]. Abstracts are part of everyday scholarly life. Almost all publications, at least in the natural sciences, engineering, information-related fields and medicine, are preceded by abstracts, "preferably prepared by its author(s) for publication with it". There is probably no scientist who has not written an abstract at some point. Does producing abstracts then belong to documentary or information-science methodology at all, if everyone can do it? What constitutes the informational added value that expert abstracts provide over lay abstracts? This is not easy to answer, especially since suitable evaluation procedures for measuring the quality of abstracts comparatively and "objectively" are lacking. Abstracts are produced to a considerable extent by information specialists, often on the assumption that authors themselves are less well suited to the task. Let us recall what we know about abstracts and abstracting. A particularly successful abstract is sometimes clearer than the source text itself, but it must not contain more information than that text: "Good abstracts are highly structured, concise, and coherent, and are the result of a thorough analysis of the content of the abstracted materials. Abstracts may be more readable than the basis documents, but because of size constraints they rarely equal and never surpass the information content of the basic document". This is understandable, for an "abstract" is first of all nothing other than the result of a process of abstraction. Without losing ourselves in the philosophical background of abstraction, it consists "in disregarding certain ideational or conceptual contents, from which one abstracts in favour of other partial contents. It is always connected with a fixation, by active attention, of those (interesting) features that are regarded from a particular pragmatic point of view as 'essential' for an imagined object or for an object (or a plurality of objects) falling under a concept". Abstracts reduce not so much conceptual contents as texts with respect to their propositional content. Borko and Bernier have even quantified this: they estimate the reduction factor at 1:10 to 1:12.
    Source
    Grundlagen der praktischen Information und Dokumentation. 5., völlig neu gefaßte Ausgabe. 2 Bde. Hrsg. von R. Kuhlen, Th. Seeger u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried. Bd.1: Handbuch zur Einführung in die Informationswissenschaft und -praxis
  2. Saggion, H.; Lapalme, G.: Selective analysis for the automatic generation of summaries (2000) 0.02
    
    Abstract
    Selective Analysis is a new method for text summarization of technical articles whose design is based on the study of a corpus of professional abstracts and technical documents. The method emphasizes the selection of particular types of information and their elaboration, exploring the issue of dynamic summarization. A computer prototype was developed to demonstrate the viability of the approach, and the automatic abstracts were evaluated using human informants. The results obtained so far indicate that the summaries are acceptable in content and text quality.
    Series
    Advances in knowledge organization; vol.7
    Source
    Dynamism and stability in knowledge organization: Proceedings of the 6th International ISKO-Conference, 10-13 July 2000, Toronto, Canada. Ed.: C. Beghtol et al
  3. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.02
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
    Source
    Information processing and management. 43(2007) no.6, S.1606-1618
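    The generic extractive component the authors extend, SumBasic, is itself a simple frequency-based method: sentences are scored by the average probability of their words, and the probabilities of used words are discounted after each selection to limit redundancy. A minimal sketch of that baseline (our own simplification, not the authors' task-focused system):

    from collections import Counter

    def sumbasic(sentences, max_sentences=3):
        # Frequency-based extractive summarization (SumBasic baseline).
        tokenized = [s.lower().split() for s in sentences]
        counts = Counter(w for sent in tokenized for w in sent)
        total = sum(counts.values())
        prob = {w: c / total for w, c in counts.items()}
        chosen = []
        while len(chosen) < min(max_sentences, len(sentences)):
            # Score each unselected sentence by its average word probability.
            best = max(
                (i for i in range(len(sentences)) if i not in chosen),
                key=lambda i: sum(prob[w] for w in tokenized[i]) / max(len(tokenized[i]), 1),
            )
            chosen.append(best)
            for w in tokenized[best]:
                prob[w] **= 2  # discount used words to reduce redundancy
        return [sentences[i] for i in sorted(chosen)]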
  4. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.01
    
    Date
22.7.2006 17:25:48
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.740-752
  5. Pinto, M.: Engineering the production of meta-information : the abstracting concern (2003) 0.01
    
    Source
    Journal of information science. 29(2003) no.5, S.405-418
  6. Craven, T.C.: Presentation of repeated phrases in a computer-assisted abstracting tool kit (2001) 0.01
    
    Source
    Information processing and management. 37(2001) no.2, S.221-230
  7. Endres-Niggemeyer, B.: SimSum : an empirically founded simulation of summarizing (2000) 0.01
    
    Source
    Information processing and management. 36(2000) no.4, S.659-682
  8. Harabagiu, S.; Hickl, A.; Lacatusu, F.: Satisfying information needs with multi-document summaries (2007) 0.01
    
    Abstract
    Generating summaries that meet the information needs of a user relies on (1) several forms of question decomposition; (2) different summarization approaches; and (3) textual inference for combining the summarization strategies. This novel framework for summarization has the advantage of producing highly responsive summaries, as indicated by the evaluation results.
    Source
    Information processing and management. 43(2007) no.6, S.1619-1642
  9. Steinberger, J.; Poesio, M.; Kabadjov, M.A.; Jezek, K.: Two uses of anaphora resolution in summarization (2007) 0.01
    
    Abstract
    We propose a new method for using anaphoric information in Latent Semantic Analysis (LSA), and discuss its application in developing an LSA-based summarizer which achieves significantly better performance than a system not using anaphoric information, and better performance by the ROUGE measure than all but one of the single-document summarizers participating in DUC-2002. Anaphoric information is automatically extracted using a new release of our own anaphora resolution system, GuiTAR, which incorporates proper-noun resolution. Our summarizer also includes a new approach to automatically identifying the dimensionality reduction of a document on the basis of the desired summarization percentage. Anaphoric information is also used to check the coherence of the summary produced by our summarizer, by a reference-checker module which identifies anaphora resolution errors caused by sentence extraction.
    Source
    Information processing and management. 43(2007) no.6, S.1663-1680
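    The LSA machinery underneath can be sketched directly: build a term-by-sentence matrix, take its SVD, and extract the sentences with the largest weight in the reduced space. A minimal sketch of that base summarizer (without the anaphoric information that is the paper's actual contribution):

    import numpy as np

    def lsa_summary(sentences, n_select=2):
        # LSA sentence ranking on a raw term-by-sentence matrix.
        vocab = sorted({w for s in sentences for w in s.lower().split()})
        index = {w: i for i, w in enumerate(vocab)}
        A = np.zeros((len(vocab), len(sentences)))
        for j, s in enumerate(sentences):
            for w in s.lower().split():
                A[index[w], j] += 1.0
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        k = max(1, len(S) // 2)  # crude cut; the paper derives the dimensionality from the target length
        # Score each sentence by the norm of its singular-value-weighted coordinates.
        scores = np.sqrt(((S[:k, None] * Vt[:k, :]) ** 2).sum(axis=0))
        top = np.argsort(scores)[::-1][:n_select]
        return [sentences[j] for j in sorted(top)]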
  10. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.01
    
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. By condensing a textual document, automatic text summarisation can reduce the need to refer to the source document; it also offers a means of delivering device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, focusing exclusively on novel parts may result in a loss of context, which may affect the correct interpretation of the summary with respect to the source document. In this study we compare two strategies for producing summaries that incorporate novelty in different ways: a constant-length summary, which contains only novel sentences, and an incremental summary, which contains additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides a sufficient basis for determining the relevance of a document, or whether additional sentences are needed to provide context. Findings from the study suggest that there is only a minimal difference in performance for the tasks we set our users, and that the presence of contextual information is not especially important. However, for the case of mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
    Source
    Information processing and management. 44(2008) no.2, S.663-686
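    The constant-length strategy can be approximated with a simple threshold test: a sentence enters the summary only if it is sufficiently dissimilar to everything already selected. A rough sketch using cosine similarity over bag-of-words vectors (the threshold value is our assumption, not the paper's setting):

    import math
    from collections import Counter

    def cosine(a, b):
        dot = sum(a[w] * b.get(w, 0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def novel_sentences(sentences, threshold=0.5):
        # Keep a sentence only if it is not too similar to any sentence already kept.
        kept, vectors = [], []
        for s in sentences:
            v = Counter(s.lower().split())
            if all(cosine(v, u) < threshold for u in vectors):
                kept.append(s)
                vectors.append(v)
        return kept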
  11. Marcu, D.: Automatic abstracting and summarization (2009) 0.01
    
    Abstract
    After lying dormant for a few decades, the field of automated text summarization has experienced a tremendous resurgence of interest. Recently, many new algorithms and techniques have been proposed for identifying important information in single documents and document collections, and for mapping this information into grammatical, cohesive, and coherent abstracts. Since 1997, annual workshops, conferences, and large-scale comparative evaluations have provided a rich environment for exchanging ideas between researchers in Asia, Europe, and North America. This entry reviews the main developments in the field and provides a guiding map to those interested in understanding the strengths and weaknesses of an increasingly ubiquitous technology.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  12. Haag, M.: Automatic text summarization (2002) 0.01
    
    Source
Information - Wissenschaft und Praxis. 53(2002) H.4, S.243-244
  13. Díaz, A.; Gervás, P.: User-model based personalized summarization (2007) 0.01
    
    Abstract
    The potential of summary personalization is high, because a summary that would be useless for judging the relevance of a document if written generically may be useful if the right sentences are selected to match the user's interests. In this paper we defend the use of a personalized summarization facility to maximize the density of relevance of selections sent by a personalized information system to a given user. The personalization is applied to the digital newspaper domain, and it uses a user model that stores long- and short-term interests through four reference systems: sections, categories, keywords and feedback terms. On the other hand, it is crucial to measure how much information is lost during the summarization process, and how this loss may affect the user's ability to judge the relevance of a given document. The results obtained in two personalization systems show that personalized summaries perform better than generic and generic-personalized summaries in terms of identifying documents that satisfy user preferences. A user-centred direct evaluation also showed a high level of user satisfaction with the summaries.
    Source
    Information processing and management. 43(2007) no.6, S.1715-1734
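    The core of such personalization is that sentences are scored against the user model rather than against the document alone. A minimal sketch with a flat keyword-weight user model (the model layout is illustrative; the paper combines four reference systems and long- and short-term interests):

    def personalized_scores(sentences, user_model):
        # Score each sentence by the summed weights of user-model terms it contains.
        scores = []
        for s in sentences:
            words = set(s.lower().split())
            scores.append(sum(w for term, w in user_model.items() if term in words))
        return scores

    # Usage: rank sentences for a user interested in elections and football.
    model = {"election": 2.0, "football": 1.5, "budget": 0.5}
    sents = ["The election results are in.", "The football season starts today."]
    print(sorted(zip(personalized_scores(sents, model), sents), reverse=True))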
  14. Haag, M.: Automatic text summarization : Evaluation des Copernic Summarizer und mögliche Einsatzfelder in der Fachinformation der DaimlerChrysler AG (2002) 0.00
    
    Abstract
    This paper presents an evaluation of the Copernic Summarizer, a software product for automatically summarizing text in various data formats. It assesses whether and how the Copernic Summarizer can reasonably be used in the DaimlerChrysler Information Division to enhance the quality of its information services. First, an introduction to automatic text summarization is given and the Copernic Summarizer is presented. Various methods for evaluating automatic text summarization systems and for assessing software ergonomics are described. Two evaluation forms are developed with which the employees of the Information Division evaluate the quality and relevance of the extracted keywords and summaries, as well as the software's usability; the quality and relevance assessment is done by comparing the original text to the summaries. Finally, a recommendation is given concerning the use of the Copernic Summarizer.
  15. Nomoto, T.: Discriminative sentence compression with conditional random fields (2007) 0.00
    
    Abstract
    The paper focuses on a particular approach to automatic sentence compression which makes use of a discriminative sequence classifier known as Conditional Random Fields (CRF). We devise several features for CRF that allow it to incorporate information on nonlinear relations among words. Along with that, we address the issue of data paucity by collecting data from RSS feeds available on the Internet, and turning them into training data for use with CRF, drawing on techniques from biology and information retrieval. We also discuss a recursive application of CRF on the syntactic structure of a sentence as a way of improving the readability of the compression it generates. Experiments found that our approach works reasonably well compared to the state-of-the-art system [Knight, K., & Marcu, D. (2002). Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence 139, 91-107.].
    Source
    Information processing and management. 43(2007) no.6, S.1571-1587
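    Framed as sequence labelling, sentence compression reduces to tagging each token keep or drop. A toy sketch with the sklearn-crfsuite package (the feature set here is our own minimal choice; the paper's features encode much richer syntactic and nonlinear relations):

    import sklearn_crfsuite  # pip install sklearn-crfsuite

    def token_features(tokens, i):
        # Minimal per-token feature dict; real systems add POS tags, parse paths, etc.
        return {
            "word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        }

    # One training pair: a sentence and keep/drop labels for its compressed form.
    sent = ["The", "very", "old", "car", "still", "runs"]
    labels = ["KEEP", "DROP", "DROP", "KEEP", "DROP", "KEEP"]
    X, y = [[token_features(sent, i) for i in range(len(sent))]], [labels]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X))  # predicted tags; the compression keeps the KEEP tokens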
  16. Hahn, U.: Die Verdichtung textuellen Wissens zu Information : vom Wandel methodischer Paradigmen beim automatischen Abstracting (2004) 0.00
    
  17. Reeve, L.H.; Han, H.; Brooks, A.D.: The use of domain-specific concepts in biomedical text summarization (2007) 0.00
    
    Abstract
    Text summarization is a method for data reduction. The use of text summarization enables users to reduce the amount of text that must be read while still assimilating the core information. The data reduction offered by text summarization is particularly useful in the biomedical domain, where physicians must continuously find clinical trial study information to incorporate into their patient treatment efforts. Such efforts are often hampered by the high volume of publications. This paper presents two independent methods (BioChain and FreqDist) for identifying salient sentences in biomedical texts using concepts derived from domain-specific resources. Our semantic-based method (BioChain) is effective at identifying thematic sentences, while our frequency-distribution method (FreqDist) removes information redundancy. The two methods are then combined to form a hybrid method (ChainFreq). An evaluation of each method is performed using the ROUGE system to compare system-generated summaries against a set of manually generated summaries. The BioChain and FreqDist methods outperform some common summarization systems, while the ChainFreq method improves upon the base approaches. Our work shows that the best performance is achieved when the two methods are combined. The paper also presents a brief physician's evaluation of three randomly selected papers from an evaluation corpus, showing that the author's abstract does not always reflect the entire contents of the full text.
    Source
    Information processing and management. 43(2007) no.6, S.1765-1776
  18. Yang, C.C.; Wang, F.L.: Hierarchical summarization of large documents (2008) 0.00
    
    Abstract
    Many automatic text summarization models have been developed in the last decades. Related research in information science has shown that human abstractors extract sentences for summaries based on the hierarchical structure of documents; however, the existing automatic summarization models do not take into account the human abstractor's behavior of sentence extraction and only consider the document as a sequence of sentences during the process of extraction of sentences as a summary. In general, a document exhibits a well-defined hierarchical structure that can be described as fractals - mathematical objects with a high degree of redundancy. In this article, we introduce the fractal summarization model based on the fractal theory. The important information is captured from the source document by exploring the hierarchical structure and salient features of the document. A condensed version of the document that is informatively close to the source document is produced iteratively using the contractive transformation in the fractal theory. The fractal summarization model is the first attempt to apply fractal theory to document summarization. It significantly improves the divergence of information coverage of summary and the precision of summary. User evaluations have been conducted. Results have indicated that fractal summarization is promising and outperforms current summarization techniques that do not consider the hierarchical structure of documents.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.6, S.887-902
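    The model's key move is to allocate the summary quota down the document tree: each section receives a share of the extraction budget proportional to its weight, recursively, instead of scoring the document as one flat sequence of sentences. A minimal sketch of that allocation step (the node schema and weights are illustrative):

    def allocate_quota(node, quota):
        # Recursively split a sentence quota over a document tree by section weight.
        # node: {"name": str, "weight": float, "children": [nodes]} - illustrative schema.
        if not node["children"]:
            return {node["name"]: quota}
        total = sum(c["weight"] for c in node["children"])
        result = {}
        for child in node["children"]:
            share = max(1, round(quota * child["weight"] / total))
            result.update(allocate_quota(child, share))
        return result

    doc = {"name": "doc", "weight": 1.0, "children": [
        {"name": "intro", "weight": 0.2, "children": []},
        {"name": "methods", "weight": 0.5, "children": []},
        {"name": "results", "weight": 0.3, "children": []},
    ]}
    print(allocate_quota(doc, quota=10))  # {'intro': 2, 'methods': 5, 'results': 3}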
  19. Soricut, R.; Marcu, D.: Abstractive headline generation using WIDL-expressions (2007) 0.00
    
    Abstract
    We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows similar performance in quality with a state-of-the-art, extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation.
    Source
    Information processing and management. 43(2007) no.6, S.1536-1548
  20. Dunlavy, D.M.; O'Leary, D.P.; Conroy, J.M.; Schlesinger, J.D.: QCS: A system for querying, clustering and summarizing documents (2007) 0.00
    
    Abstract
    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel integrated information retrieval system-the Query, Cluster, Summarize (QCS) system-which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of methods in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) as measured by the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence "trimming" and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
    Source
    Information processing and management. 43(2007) no.6, S.1588-1605
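    The query-cluster-summarize pipeline maps naturally onto standard library components. A rough sketch with scikit-learn, substituting plain k-means on unit-length LSI vectors for the paper's generalized spherical k-means, and a placeholder for its HMM/pivoted-QR sentence extractor:

    from sklearn.cluster import KMeans
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.preprocessing import normalize

    def qcs_sketch(docs, query, n_clusters=2, n_retrieve=6):
        # 1. Query: represent documents in an LSI space, retrieve by cosine similarity.
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(docs)
        lsi = TruncatedSVD(n_components=min(50, X.shape[0] - 1, X.shape[1] - 1))
        Z = normalize(lsi.fit_transform(X))  # unit-length document vectors
        q = normalize(lsi.transform(tfidf.transform([query])))
        top = cosine_similarity(q, Z)[0].argsort()[::-1][:n_retrieve]
        # 2. Cluster: k-means on unit vectors roughly approximates spherical k-means.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z[top])
        # 3. Summarize: placeholder - return one retrieved document per cluster.
        summaries = {}
        for i, c in enumerate(labels):
            summaries.setdefault(int(c), docs[top[i]])
        return summaries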

Languages

  • English (e): 38
  • German (d): 6

Types

  • Articles (a): 43
  • Monographs (m): 1