Search (49 results, page 1 of 3)

  • year_i:[2000 TO 2010}
  • theme_ss:"Automatisches Abstracting"
  1. Steinberger, J.; Poesio, M.; Kabadjov, M.A.; Jezek, K.: Two uses of anaphora resolution in summarization (2007) 0.02
    Abstract
    We propose a new method for using anaphoric information in Latent Semantic Analysis (LSA), and discuss its application to develop an LSA-based summarizer which achieves a significantly better performance than a system not using anaphoric information, and a better performance by the ROUGE measure than all but one of the single-document summarizers participating in DUC-2002. Anaphoric information is automatically extracted using a new release of our own anaphora resolution system, GUITAR, which incorporates proper noun resolution. Our summarizer also includes a new approach for automatically identifying the dimensionality reduction of a document on the basis of the desired summarization percentage. Anaphoric information is also used to check the coherence of the summary produced by our summarizer, by a reference checker module which identifies anaphoric resolution errors caused by sentence extraction.
    Type
    a
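    A minimal sketch of the LSA-based sentence selection that the entry above builds on: sentences form a term-by-sentence matrix, an SVD projects them into a low-rank topic space, and the highest-loading sentence per retained dimension is extracted. This is a generic illustration only; the GUITAR anaphora resolution, the dimensionality heuristic and the reference checker are not modelled, and the example sentences and use of scikit-learn are assumptions.

      # Generic LSA extractive selection; NOT the anaphora-enhanced system above.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      sentences = [
          "Anaphora resolution links pronouns to the entities they refer to.",
          "Latent semantic analysis projects sentences into a low-rank topic space.",
          "The summarizer extracts the sentence that loads highest on each topic.",
          "A reference checker can flag pronouns whose antecedents were not extracted.",
      ]

      tfidf = TfidfVectorizer().fit_transform(sentences)  # sentences x terms
      A = tfidf.T.toarray()                               # terms x sentences

      # Rows of Vt give sentence coordinates in the latent "topic" space.
      U, s, Vt = np.linalg.svd(A, full_matrices=False)

      k = 2                                 # retained dimensions ~ summary length
      chosen = []
      for dim in range(k):
          # pick the sentence with the largest loading on this latent dimension
          idx = int(np.argmax(np.abs(Vt[dim])))
          if idx not in chosen:
              chosen.append(idx)

      print("\n".join(sentences[i] for i in sorted(chosen)))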
  2. Nomoto, T.: Discriminative sentence compression with conditional random fields (2007) 0.02
    Abstract
    The paper focuses on a particular approach to automatic sentence compression which makes use of a discriminative sequence classifier known as Conditional Random Fields (CRF). We devise several features for CRF that allow it to incorporate information on nonlinear relations among words. Along with that, we address the issue of data paucity by collecting data from RSS feeds available on the Internet, and turning them into training data for use with CRF, drawing on techniques from biology and information retrieval. We also discuss a recursive application of CRF on the syntactic structure of a sentence as a way of improving the readability of the compression it generates. Experiments found that our approach works reasonably well compared to the state-of-the-art system [Knight, K., & Marcu, D. (2002). Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence 139, 91-107.].
    Type
    a
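    The compression approach in the entry above can be pictured as token-level sequence labeling: each word of the source sentence is tagged KEEP or DROP and the kept words form the compression. The sketch below uses the third-party sklearn_crfsuite package (an assumed dependency), toy features and an invented single training pair; it illustrates the framing, not the authors' feature set or data.

      # Sentence compression framed as keep/drop sequence labeling with a CRF.
      import sklearn_crfsuite

      def token_features(tokens, i):
          """Simple per-token features; real systems add syntactic information."""
          tok = tokens[i]
          return {
              "lower": tok.lower(),
              "is_function_word": tok.lower() in {"the", "a", "of", "that", "which"},
              "prev": tokens[i - 1].lower() if i > 0 else "<s>",
              "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
          }

      # One toy training pair: a label per token of the source sentence.
      src = "the committee , which met yesterday , approved the new budget".split()
      lbl = ["KEEP", "KEEP", "DROP", "DROP", "DROP", "DROP", "DROP",
             "KEEP", "KEEP", "KEEP", "KEEP"]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
      crf.fit([[token_features(src, i) for i in range(len(src))]], [lbl])

      test = "the board , which convened today , rejected the old plan".split()
      pred = crf.predict([[token_features(test, i) for i in range(len(test))]])[0]
      print(" ".join(t for t, y in zip(test, pred) if y == "KEEP"))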
  3. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.02
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
    Type
    a
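    The generic extractive component named in the entry above, SumBasic, has a well-known selection loop: score each sentence by the average probability of its words, pick the best one, square the probabilities of the words it contains so repeated content is down-weighted, and repeat. The sketch below shows only that loop, with invented input; the topic-focusing, sentence simplification and lexical expansion stages are omitted.

      # Minimal SumBasic-style selection loop (generic component only).
      from collections import Counter

      def sumbasic(sentences, n_select=2):
          tokenized = [[w.lower() for w in s.split()] for s in sentences]
          counts = Counter(w for toks in tokenized for w in toks)
          total = sum(counts.values())
          prob = {w: c / total for w, c in counts.items()}

          summary, remaining = [], list(range(len(sentences)))
          while remaining and len(summary) < n_select:
              # score = mean word probability; take the highest-scoring sentence
              best = max(remaining,
                         key=lambda i: sum(prob[w] for w in tokenized[i]) / len(tokenized[i]))
              summary.append(sentences[best])
              remaining.remove(best)
              # down-weight covered words so the next pick adds new content
              for w in tokenized[best]:
                  prob[w] = prob[w] ** 2
          return summary

      docs = [
          "The storm closed the airport for two days .",
          "Airport officials said the storm damaged two runways .",
          "A charity concert was held downtown on Friday .",
      ]
      print(sumbasic(docs))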
  4. Lam, W.; Chan, K.; Radev, D.; Saggion, H.; Teufel, S.: Context-based generic cross-lingual retrieval of documents and automated summaries (2005) 0.02
    Abstract
    We develop a context-based generic cross-lingual retrieval model that can deal with different language pairs. Our model considers contexts in the query translation process. Contexts in the query as well as in the documents based on co-occurrence statistics from different granularity of passages are exploited. We also investigate cross-lingual retrieval of automatic generic summaries. We have implemented our model for two different cross-lingual settings, namely, retrieving Chinese documents from English queries as well as retrieving English documents from Chinese queries. Extensive experiments have been conducted on a large-scale parallel corpus enabling studies on retrieval performance for two different cross-lingual settings of full-length documents as well as automated summaries.
    Type
    a
  5. Sparck Jones, K.: Automatic summarising : the state of the art (2007) 0.01
    Type
    a
  6. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.01
    Abstract
    Document keyphrases provide a concise summary of a document's content, offering semantic metadata summarizing a document. They can be used in many applications related to knowledge management and text mining, such as automatic text summarization, development of search engines, document clustering, document classification, thesaurus construction, and browsing interfaces. Because only a small portion of documents have keyphrases assigned by authors, and it is time-consuming and costly to manually assign keyphrases to documents, it is necessary to develop an algorithm to automatically generate keyphrases for documents. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human identified phrases to assign weights to the candidate keyphrases. The logic of our algorithm is: The more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. KIP's learning function can enrich the glossary database by automatically adding new identified keyphrases to the database. KIP's personalization feature will let the user build a glossary database specifically suitable for the area of his/her interest. The evaluation results show that KIP's performance is better than the systems we compared to and that the learning function is effective.
    Date
    22. 7.2006 17:25:48
    Type
    a
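    The scoring logic quoted in the entry above ("the more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase") can be illustrated with a toy ranker: candidate n-grams score by the summed weights of glossary keywords they contain. The glossary weights, candidate extraction and sample text below are invented; KIP's learning and personalization features are not modelled.

      # Toy KIP-style candidate phrase scoring against a weighted glossary.
      import re

      glossary = {"text": 1.0, "summarization": 3.0, "keyphrase": 3.0,
                  "extraction": 2.0, "learning": 2.0}

      def candidates(text, max_len=3):
          words = re.findall(r"[a-z]+", text.lower())
          for i in range(len(words)):
              for n in range(1, max_len + 1):
                  if i + n <= len(words):
                      yield tuple(words[i:i + n])

      def score(phrase):
          # more keywords, and more significant keywords, give a higher score
          return sum(glossary.get(w, 0.0) for w in phrase)

      text = "Automatic keyphrase extraction supports text summarization and browsing."
      for phrase in sorted(set(candidates(text)), key=score, reverse=True)[:3]:
          print(" ".join(phrase), score(phrase))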
  7. Dunlavy, D.M.; O'Leary, D.P.; Conroy, J.M.; Schlesinger, J.D.: QCS: A system for querying, clustering and summarizing documents (2007) 0.01
    Abstract
    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel integrated information retrieval system-the Query, Cluster, Summarize (QCS) system-which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of methods in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) as measured by the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence "trimming" and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
    Type
    a
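    The clustering stage of the QCS pipeline described above relies on spherical (cosine-similarity) k-means. The sketch below shows a bare-bones, non-generalized variant on TF-IDF vectors: rows are length-normalized, assignment uses the dot product, and centroids are renormalized each iteration. Documents, k and the use of scikit-learn for vectorization are assumptions; the LSI retrieval and HMM/QR summarization stages are not shown.

      # Bare-bones spherical k-means on unit-length TF-IDF document vectors.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = [
          "latent semantic indexing for retrieval",
          "retrieval with latent semantic analysis",
          "hidden markov model sentence extraction",
          "sentence trimming and extraction with a markov model",
      ]
      X = TfidfVectorizer().fit_transform(docs).toarray()
      X /= np.linalg.norm(X, axis=1, keepdims=True)       # unit-length rows

      def spherical_kmeans(X, k, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          centroids = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(iters):
              labels = np.argmax(X @ centroids.T, axis=1)   # cosine = dot product
              for j in range(k):
                  members = X[labels == j]
                  if len(members):
                      c = members.sum(axis=0)
                      centroids[j] = c / np.linalg.norm(c)  # renormalize centroid
          return labels

      print(spherical_kmeans(X, k=2))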
  8. Hobson, S.P.; Dorr, B.J.; Monz, C.; Schwartz, R.: Task-based evaluation of text summarization using Relevance Prediction (2007) 0.00
    Abstract
    This article introduces a new task-based evaluation measure called Relevance Prediction that is a more intuitive measure of an individual's performance on a real-world task than interannotator agreement. Relevance Prediction parallels what a user does in the real world task of browsing a set of documents using standard search tools, i.e., the user judges relevance based on a short summary and then that same user - not an independent user - decides whether to open (and judge) the corresponding document. This measure is shown to be a more reliable measure of task performance than LDC Agreement, a current gold-standard based measure used in the summarization evaluation community. Our goal is to provide a stable framework within which developers of new automatic measures may make stronger statistical statements about the effectiveness of their measures in predicting summary usefulness. We demonstrate - as a proof-of-concept methodology for automatic metric developers - that a current automatic evaluation measure has a better correlation with Relevance Prediction than with LDC Agreement and that the significance level for detected differences is higher for the former than for the latter.
    Type
    a
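    At its core, the Relevance Prediction measure described above compares, per user and document, the relevance judgment made from the summary with the judgment the same user later makes on the full document, and reports the agreement rate. A small sketch with invented judgments:

      # Core Relevance Prediction computation on invented (user, doc) judgments.
      summary_judgments = {          # (user, doc) -> relevant according to summary?
          ("u1", "d1"): True,  ("u1", "d2"): False,
          ("u2", "d1"): True,  ("u2", "d2"): True,
      }
      fulltext_judgments = {         # same user re-judging the full document
          ("u1", "d1"): True,  ("u1", "d2"): False,
          ("u2", "d1"): False, ("u2", "d2"): True,
      }

      pairs = summary_judgments.keys() & fulltext_judgments.keys()
      agree = sum(summary_judgments[p] == fulltext_judgments[p] for p in pairs)
      print(f"Relevance Prediction agreement: {agree / len(pairs):.2f}")  # 0.75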
  9. Díaz, A.; Gervás, P.: User-model based personalized summarization (2007) 0.00
    Abstract
    The potential of summary personalization is high, because a summary that would be useless to decide the relevance of a document if summarized in a generic manner, may be useful if the right sentences are selected that match the user interest. In this paper we defend the use of a personalized summarization facility to maximize the density of relevance of selections sent by a personalized information system to a given user. The personalization is applied to the digital newspaper domain and it used a user-model that stores long and short term interests using four reference systems: sections, categories, keywords and feedback terms. On the other side, it is crucial to measure how much information is lost during the summarization process, and how this information loss may affect the ability of the user to judge the relevance of a given document. The results obtained in two personalization systems show that personalized summaries perform better than generic and generic-personalized summaries in terms of identifying documents that satisfy user preferences. We also considered a user-centred direct evaluation that showed a high level of user satisfaction with the summaries.
    Type
    a
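    The personalization idea in the entry above amounts to scoring each sentence against the user model so that the selected sentences match the user's interests. The sketch below collapses the four reference systems (sections, categories, keywords, feedback terms) into one weighted term dictionary; the user model, article and summary length are invented.

      # User-model based sentence selection: score sentences by weighted interest terms.
      user_model = {"football": 2.0, "champions": 1.5, "league": 1.5, "economy": 0.2}

      article = [
          "The champions league final will be played in June .",
          "The finance ministry revised its economy forecast .",
          "Local football clubs reported record attendance .",
      ]

      def personalized_score(sentence):
          words = [w.lower() for w in sentence.split()]
          return sum(user_model.get(w, 0.0) for w in words)

      # the personalized summary keeps the sentences densest in user interests
      summary = sorted(article, key=personalized_score, reverse=True)[:2]
      print(summary)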
  10. Craven, T.C.: Presentation of repeated phrases in a computer-assisted abstracting tool kit (2001) 0.00
    Type
    a
  11. Yusuff, A.: Automatisches Indexing and Abstracting : Grundlagen und Beispiele (2002) 0.00
    Imprint
    Potsdam : Fachhochschule, FB A-B-D
  12. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.00
    Abstract
    Keywords can be considered as condensed versions of documents and short forms of their summaries. In this paper, the problem of automatic extraction of keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text and it can be said that a lexical chain represents the semantic content of a portion of the text. Although lexical chains have been extensively used in text summarization, their usage for keyword extraction problem has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained.
    Type
    a
  13. Ling, X.; Jiang, J.; He, X.; Mei, Q.; Zhai, C.; Schatz, B.: Generating gene summaries from biomedical literature : a study of semi-structured summarization (2007) 0.00
    Abstract
    Most knowledge accumulated through scientific discoveries in genomics and related biomedical disciplines is buried in the vast amount of biomedical literature. Since understanding gene regulations is fundamental to biomedical research, summarizing all the existing knowledge about a gene based on literature is highly desirable to help biologists digest the literature. In this paper, we present a study of methods for automatically generating gene summaries from biomedical literature. Unlike most existing work on automatic text summarization, in which the generated summary is often a list of extracted sentences, we propose to generate a semi-structured summary which consists of sentences covering specific semantic aspects of a gene. Such a semi-structured summary is more appropriate for describing genes and poses special challenges for automatic text summarization. We propose a two-stage approach to generate such a summary for a given gene - first retrieving articles about a gene and then extracting sentences for each specified semantic aspect. We address the issue of gene name variation in the first stage and propose several different methods for sentence extraction in the second stage. We evaluate the proposed methods using a test set with 20 genes. Experiment results show that the proposed methods can generate useful semi-structured gene summaries automatically from biomedical literature, and our proposed methods outperform general purpose summarization methods. Among all the proposed methods for sentence extraction, a probabilistic language modeling approach that models gene context performs the best.
    Type
    a
  14. Zajic, D.; Dorr, B.J.; Lin, J.; Schwartz, R.: Multi-candidate reduction : sentence compression as a tool for document summarization tasks (2007) 0.00
    Abstract
    This article examines the application of two single-document sentence compression techniques to the problem of multi-document summarization-a "parse-and-trim" approach and a statistical noisy-channel approach. We introduce the multi-candidate reduction (MCR) framework for multi-document summarization, in which many compressed candidates are generated for each source sentence. These candidates are then selected for inclusion in the final summary based on a combination of static and dynamic features. Evaluations demonstrate that sentence compression is a valuable component of a larger multi-document summarization framework.
    Type
    a
  15. Ou, S.; Khoo, C.S.G.; Goh, D.H.: Multi-document summarization of news articles using an event-based framework (2006) 0.00
    Abstract
    Purpose - The purpose of this research is to develop a method for automatic construction of multi-document summaries of sets of news articles that might be retrieved by a web search engine in response to a user query. Design/methodology/approach - Based on the cross-document discourse analysis, an event-based framework is proposed for integrating and organizing information extracted from different news articles. It has a hierarchical structure in which the summarized information is presented at the top level and more detailed information given at the lower levels. A tree-view interface was implemented for displaying a multi-document summary based on the framework. A preliminary user evaluation was performed by comparing the framework-based summaries against the sentence-based summaries. Findings - In a small evaluation, all the human subjects preferred the framework-based summaries to the sentence-based summaries. It indicates that the event-based framework is an effective way to summarize a set of news articles reporting an event or a series of relevant events. Research limitations/implications - Limited to event-based news articles only, not applicable to news critiques and other kinds of news articles. A summarization system based on the event-based framework is being implemented. Practical implications - Multi-document summarization of news articles can adopt the proposed event-based framework. Originality/value - An event-based framework for summarizing sets of news articles was developed and evaluated using a tree-view interface for displaying such summaries.
    Type
    a
  16. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.00
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. Condensing a textual document, automatic text summarisation can reduce the need to refer to the source document. It also offers a means to deliver device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, a consequence of focusing exclusively on novel parts may result in a loss of context, which may have an impact on the correct interpretation of the summary, with respect to the source document. In this study we compare two strategies to produce summaries that incorporate novelty in different ways: a constant length summary, which contains only novel sentences, and an incremental summary, containing additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides sufficient basis to determine relevance of a document, or if indeed we need to include additional sentences to provide context. Findings from the study seem to suggest that there is only a minimal difference in performance for the tasks we set our users and that the presence of contextual information is not so important. However, for the case of mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
    Type
    a
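    The "novel sentences only" strategy in the entry above can be approximated by admitting a sentence into the summary only if it is sufficiently dissimilar from everything already shown. A minimal sketch, assuming TF-IDF cosine similarity, an invented threshold and invented sentences; the incremental variant with added context would simply relax this filter:

      # Novelty-based selection: keep a sentence only if it is dissimilar
      # (cosine < threshold) to all sentences already in the summary.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      sentences = [
          "The volcano erupted early on Monday morning.",
          "The eruption on Monday forced hundreds to evacuate.",
          "Airlines cancelled flights because of the ash cloud.",
          "Hundreds were evacuated after the eruption.",
      ]
      X = TfidfVectorizer().fit_transform(sentences).toarray()
      X /= np.linalg.norm(X, axis=1, keepdims=True)

      selected, threshold = [], 0.5
      for i, vec in enumerate(X):
          if all(float(vec @ X[j]) < threshold for j in selected):
              selected.append(i)      # novel enough with respect to the summary so far

      print([sentences[i] for i in selected])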
  17. Dorr, B.J.; Gaasterland, T.: Exploiting aspectual features and connecting words for summarization-inspired temporal-relation extraction (2007) 0.00
    Abstract
    This paper presents a model that incorporates contemporary theories of tense and aspect and develops a new framework for extracting temporal relations between two sentence-internal events, given their tense, aspect, and a temporal connecting word relating the two events. A linguistic constraint on event combination has been implemented to detect incorrect parser analyses and potentially apply syntactic reanalysis or semantic reinterpretation - in preparation for subsequent processing for multi-document summarization. An important contribution of this work is the extension of two different existing theoretical frameworks - Hornstein's 1990 theory of tense analysis and Allen's 1984 theory on event ordering - and the combination of both into a unified system for representing and constraining combinations of different event types (points, closed intervals, and open-ended intervals). We show that our theoretical results have been verified in a large-scale corpus analysis. The framework is designed to inform a temporally motivated sentence-ordering module in an implemented multi-document summarization system.
    Type
    a
  18. Sjöbergh, J.: Older versions of the ROUGEeval summarization evaluation system were easier to fool (2007) 0.00
    Abstract
    We show some limitations of the ROUGE evaluation method for automatic summarization. We present a method for automatic summarization based on a Markov model of the source text. By a simple greedy word selection strategy, summaries with high ROUGE-scores are generated. These summaries would however not be considered good by human readers. The method can be adapted to trick different settings of the ROUGEeval package.
    Type
    a
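    The weakness discussed above can be illustrated by greedily emitting words from a simple Markov (bigram) model of the source: the output shares many n-grams with the source and therefore scores well under overlap metrics, yet reads poorly. The corpus, chain length and greedy rule below are invented illustrations, not the authors' exact generator.

      # Greedy word selection from a bigram model of the source text: high
      # n-gram overlap with the source, little readability.
      from collections import Counter, defaultdict

      source = ("the storm closed the airport . the storm damaged the runway . "
                "officials said the airport will reopen .").split()

      bigrams = defaultdict(Counter)        # crude Markov model of the source
      for w1, w2 in zip(source, source[1:]):
          bigrams[w1][w2] += 1

      word, output = "the", ["the"]
      for _ in range(11):
          if not bigrams[word]:
              break
          word = bigrams[word].most_common(1)[0][0]   # most frequent successor
          output.append(word)

      print(" ".join(output))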
  19. Ye, S.; Chua, T.-S.; Kan, M.-Y.; Qiu, L.: Document concept lattice for text understanding and summarization (2007) 0.00
    Abstract
    We argue that the quality of a summary can be evaluated based on how many concepts in the original document(s) that can be preserved after summarization. Here, a concept refers to an abstract or concrete entity or its action often expressed by diverse terms in text. Summary generation can thus be considered as an optimization problem of selecting a set of sentences with minimal answer loss. In this paper, we propose a document concept lattice that indexes the hierarchy of local topics tied to a set of frequent concepts and the corresponding sentences containing these topics. The local topics will specify the promising sub-spaces related to the selected concepts and sentences. Based on this lattice, the summary is an optimized selection of a set of distinct and salient local topics that lead to maximal coverage of concepts with the given number of sentences. Our summarizer based on the concept lattice has demonstrated competitive performance in Document Understanding Conference 2005 and 2006 evaluations as well as follow-on tests.
    Type
    a
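    The selection objective described above (maximal coverage of salient concepts with a fixed number of sentences) reduces to a greedy set-cover style loop. The sketch below uses hand-assigned concept sets rather than a real document concept lattice, and the concepts, sentences and budget are invented.

      # Greedy concept-coverage selection: each step adds the sentence that
      # covers the most not-yet-covered concepts.
      sentence_concepts = {
          "s1": {"earthquake", "rescue", "damage"},
          "s2": {"earthquake", "aftershock"},
          "s3": {"rescue", "damage", "aid"},
          "s4": {"aftershock", "aid"},
      }

      budget, covered, summary = 2, set(), []
      for _ in range(budget):
          best = max(sentence_concepts, key=lambda s: len(sentence_concepts[s] - covered))
          summary.append(best)
          covered |= sentence_concepts.pop(best)

      print(summary, covered)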
  20. Moens, M.F.; Dumortier, J.: Use of a text grammar for generating highlight abstracts of magazine articles (2000) 0.00
    Abstract
    Browsing a database of article abstracts is one way to select and buy relevant magazine articles online. Our research contributes to the design and development of text grammars for abstracting texts in unlimited subject domains. We developed a system that parses texts based on the text grammar of a specific text type and that extracts sentences and statements which are relevant for inclusion in the abstracts. The system employs knowledge of the discourse patterns that are typical of news stories. The results are encouraging and demonstrate the importance of discourse structures in text summarisation.
    Type
    a

Languages

  • e 40
  • d 9

Types

  • a 47
  • m 1
  • x 1