Search (38 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Automatisches Abstracting"
  • year_i:[2000 TO 2010}
  1. Vanderwende, L.; Suzuki, H.; Brockett, C.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007)
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
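    The generic extractive component extends SumBasic, which scores sentences by the average probability of their words and then squares the probabilities of words already selected to discourage redundancy. A minimal sketch of that scoring loop, assuming whitespace tokenization; the task-focusing, simplification, and lexical-expansion components are not shown and the names are illustrative:

    from collections import Counter

    def sumbasic(sentences, summary_len=3):
        """Greedy SumBasic-style extraction: pick the sentence with the
        highest average word probability, then square the probability of
        its words (the published SumBasic update rule)."""
        tokenized = [s.lower().split() for s in sentences]
        counts = Counter(w for sent in tokenized for w in sent)
        total = sum(counts.values())
        prob = {w: c / total for w, c in counts.items()}

        summary, candidates = [], list(range(len(sentences)))
        while candidates and len(summary) < summary_len:
            best = max(candidates,
                       key=lambda i: sum(prob[w] for w in tokenized[i])
                       / max(len(tokenized[i]), 1))
            summary.append(sentences[best])
            candidates.remove(best)
            for w in tokenized[best]:
                prob[w] **= 2  # damp words the summary already covers
        return summary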
    Source
    Information processing and management. 43(2007) no.6, S.1606-1618
  2. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006)
    Date
22.7.2006 17:25:48
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.740-752
  3. Pinto, M.: Engineering the production of meta-information : the abstracting concern (2003)
    Source
    Journal of information science. 29(2003) no.5, S.405-418
  4. Craven, T.C.: Presentation of repeated phrases in a computer-assisted abstracting tool kit (2001)
    Source
    Information processing and management. 37(2001) no.2, S.221-230
  5. Endres-Niggemeyer, B.: SimSum : an empirically founded simulation of summarizing (2000)
    Source
    Information processing and management. 36(2000) no.4, S.659-682
  6. Harabagiu, S.; Hickl, A.; Lacatusu, F.: Satisfying information needs with multi-document summaries (2007)
    Abstract
    Generating summaries that meet the information needs of a user relies on (1) several forms of question decomposition; (2) different summarization approaches; and (3) textual inference for combining the summarization strategies. This novel framework for summarization has the advantage of producing highly responsive summaries, as indicated by the evaluation results.
    Source
    Information processing and management. 43(2007) no.6, S.1619-1642
  7. Steinberger, J.; Poesio, M.; Kabadjov, M.A.; Jezek, K.: Two uses of anaphora resolution in summarization (2007)
    Abstract
    We propose a new method for using anaphoric information in Latent Semantic Analysis (LSA), and discuss its application to develop an LSA-based summarizer which achieves a significantly better performance than a system not using anaphoric information, and a better performance by the ROUGE measure than all but one of the single-document summarizers participating in DUC-2002. Anaphoric information is automatically extracted using a new release of our own anaphora resolution system, GUITAR, which incorporates proper noun resolution. Our summarizer also includes a new approach for automatically identifying the dimensionality reduction of a document on the basis of the desired summarization percentage. Anaphoric information is also used to check the coherence of the summary produced by our summarizer, by a reference checker module which identifies anaphoric resolution errors caused by sentence extraction.
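    For context, the core LSA scoring step works on a term-by-sentence matrix: take its SVD and rank sentences by their weighted length in the top latent dimensions. A minimal sketch of that step only, assuming scikit-learn and NumPy are available; the anaphoric enhancement (substituting resolved references into the matrix) is omitted:

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer

    def lsa_rank(sentences, k=2):
        """Rank sentences by their weight in the top-k latent topics of
        the term-by-sentence matrix (Steinberger/Jezek-style scoring)."""
        A = CountVectorizer().fit_transform(sentences).T.toarray()  # terms x sentences
        U, s, Vt = np.linalg.svd(A.astype(float), full_matrices=False)
        k = min(k, len(s))
        # Sentence score: length of its vector in the reduced space,
        # with each dimension weighted by its singular value.
        scores = np.sqrt(((s[:k, None] * Vt[:k]) ** 2).sum(axis=0))
        return sorted(range(len(sentences)), key=lambda i: -scores[i])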
    Source
    Information processing and management. 43(2007) no.6, S.1663-1680
  8. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008)
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. By condensing a textual document, automatic text summarisation can reduce the need to refer to the source document. It also offers a means to deliver device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, focusing exclusively on novel parts may result in a loss of context, which may affect the correct interpretation of the summary with respect to the source document. In this study we compare two strategies to produce summaries that incorporate novelty in different ways: a constant-length summary, which contains only novel sentences, and an incremental summary, containing additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides a sufficient basis to determine the relevance of a document, or whether we need to include additional sentences to provide context. Findings suggest that there is only a minimal difference in performance on the tasks we set our users, and that the presence of contextual information is not especially important. However, for mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
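    A novelty filter of the kind the study examines can be approximated by discarding candidate sentences that are too similar to sentences the user has already seen. A minimal cosine-similarity sketch, assuming scikit-learn; the 0.35 threshold is an arbitrary illustration, not a value from the paper:

    from sklearn.feature_extraction.text import TfidfVectorizer

    def novel_sentences(candidates, seen, threshold=0.35):
        """Keep candidates whose maximum cosine similarity to
        already-shown sentences stays below a novelty threshold."""
        if not seen:
            return list(candidates)
        vec = TfidfVectorizer().fit(candidates + seen)
        C, S = vec.transform(candidates), vec.transform(seen)
        # tf-idf rows are l2-normalised, so the dot product is cosine.
        sims = (C @ S.T).toarray()
        return [c for c, row in zip(candidates, sims) if row.max() < threshold]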
    Source
    Information processing and management. 44(2008) no.2, S.663-686
  9. Marcu, D.: Automatic abstracting and summarization (2009)
    Abstract
    After lying dormant for a few decades, the field of automated text summarization has experienced a tremendous resurgence of interest. Recently, many new algorithms and techniques have been proposed for identifying important information in single documents and document collections, and for mapping this information into grammatical, cohesive, and coherent abstracts. Since 1997, annual workshops, conferences, and large-scale comparative evaluations have provided a rich environment for exchanging ideas between researchers in Asia, Europe, and North America. This entry reviews the main developments in the field and provides a guiding map to those interested in understanding the strengths and weaknesses of an increasingly ubiquitous technology.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  10. Díaz, A.; Gervás, P.: User-model based personalized summarization (2007)
    Abstract
    The potential of summary personalization is high: a summary that would be useless for deciding the relevance of a document when produced generically may be useful if the right sentences are selected to match the user's interests. In this paper we defend the use of a personalized summarization facility to maximize the density of relevance of selections sent by a personalized information system to a given user. The personalization is applied to the digital newspaper domain and uses a user model that stores long- and short-term interests in four reference systems: sections, categories, keywords, and feedback terms. At the same time, it is crucial to measure how much information is lost during the summarization process, and how this loss may affect the user's ability to judge the relevance of a given document. The results obtained in two personalization systems show that personalized summaries perform better than generic and generic-personalized summaries in terms of identifying documents that satisfy user preferences. A user-centred direct evaluation also showed a high level of user satisfaction with the summaries.
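    A sentence scorer against such a user model might look like the following sketch. The four reference systems are taken from the abstract; the weights and the simple overlap measure are illustrative assumptions, not the paper's formulation:

    def personalized_score(sentence_words, user_model, weights=None):
        """Score a sentence by term overlap with each reference system
        in a user model (weights are illustrative, not the paper's)."""
        weights = weights or {"sections": 0.2, "categories": 0.2,
                              "keywords": 0.3, "feedback": 0.3}
        words = {w.lower() for w in sentence_words}
        return sum(w * len(words & {t.lower() for t in user_model[system]})
                   for system, w in weights.items())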
    Source
    Information processing and management. 43(2007) no.6, S.1715-1734
  11. Nomoto, T.: Discriminative sentence compression with conditional random fields (2007)
    Abstract
    The paper focuses on a particular approach to automatic sentence compression which makes use of a discriminative sequence classifier known as Conditional Random Fields (CRF). We devise several features for CRF that allow it to incorporate information on nonlinear relations among words. Along with that, we address the issue of data paucity by collecting data from RSS feeds available on the Internet, and turning them into training data for use with CRF, drawing on techniques from biology and information retrieval. We also discuss a recursive application of CRF on the syntactic structure of a sentence as a way of improving the readability of the compression it generates. Experiments found that our approach works reasonably well compared to the state-of-the-art system [Knight, K., & Marcu, D. (2002). Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence 139, 91-107.].
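    Framed as sequence labelling, each token receives a keep/drop label and a feature map for the CRF. A sketch of what such a feature function could look like; the concrete feature set here is an illustrative assumption, not the paper's:

    def token_features(tokens, i):
        """Feature map for token i in a keep/drop sentence-compression
        labelling task (illustrative features only)."""
        w = tokens[i]
        return {
            "word": w.lower(),
            "is_capitalised": w[:1].isupper(),
            "is_digit": w.isdigit(),
            "prev_word": tokens[i - 1].lower() if i > 0 else "<s>",
            "next_word": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
            "position": i / len(tokens),  # relative position in the sentence
        }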
    Source
    Information processing and management. 43(2007) no.6, S.1571-1587
  12. Reeve, L.H.; Han, H.; Brooks, A.D.: The use of domain-specific concepts in biomedical text summarization (2007)
    Abstract
    Text summarization is a method for data reduction. The use of text summarization enables users to reduce the amount of text that must be read while still assimilating the core information. The data reduction offered by text summarization is particularly useful in the biomedical domain, where physicians must continuously find clinical trial study information to incorporate into their patient treatment efforts. Such efforts are often hampered by the high volume of publications. This paper presents two independent methods (BioChain and FreqDist) for identifying salient sentences in biomedical texts using concepts derived from domain-specific resources. Our semantic-based method (BioChain) is effective at identifying thematic sentences, while our frequency-distribution method (FreqDist) removes information redundancy. The two methods are then combined to form a hybrid method (ChainFreq). An evaluation of each method is performed using the ROUGE system to compare system-generated summaries against a set of manually generated summaries. The BioChain and FreqDist methods outperform some common summarization systems, while the ChainFreq method improves upon the base approaches. Our work shows that the best performance is achieved when the two methods are combined. The paper also presents a brief physician's evaluation of three randomly selected papers from an evaluation corpus, showing that the author's abstract does not always reflect the entire contents of the full text.
    Source
    Information processing and management. 43(2007) no.6, S.1765-1776
  13. Yang, C.C.; Wang, F.L.: Hierarchical summarization of large documents (2008)
    Abstract
    Many automatic text summarization models have been developed over the past few decades. Related research in information science has shown that human abstractors extract sentences for summaries based on the hierarchical structure of documents; existing automatic summarization models, however, do not take this behavior into account and treat the document as a flat sequence of sentences when extracting a summary. In general, a document exhibits a well-defined hierarchical structure that can be described as fractals - mathematical objects with a high degree of redundancy. In this article, we introduce the fractal summarization model based on fractal theory. The important information is captured from the source document by exploring the hierarchical structure and salient features of the document. A condensed version of the document that is informatively close to the source is produced iteratively using the contractive transformation of fractal theory. The fractal summarization model is the first attempt to apply fractal theory to document summarization. It significantly improves the information coverage and the precision of the summary. User evaluations have been conducted, and the results indicate that fractal summarization is promising and outperforms current summarization techniques that do not consider the hierarchical structure of documents.
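    The fractal allocation idea can be sketched as a recursive quota split over the document tree, each node receiving a share of the sentence budget proportional to its salience weight. A minimal sketch; the dict schema and the weighting are illustrative assumptions, not the paper's exact formulation:

    def allocate_quota(node, quota):
        """Split a sentence budget over a document tree in proportion to
        node weights, recursing until leaves pick their top sentences.
        Assumed schema: a node is a dict with "weight", optional
        "children", and, at leaves, "sentences" as
        [{"weight": float, "text": str}, ...]."""
        if not node.get("children"):
            ranked = sorted(node["sentences"], key=lambda s: -s["weight"])
            # max(1, ...) keeps every leaf represented, at the cost of a
            # slight overshoot of the overall budget.
            return [s["text"] for s in ranked[:max(1, round(quota))]]
        total = sum(c["weight"] for c in node["children"])
        picked = []
        for child in node["children"]:
            picked += allocate_quota(child, quota * child["weight"] / total)
        return picked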
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.6, S.887-902
  14. Soricut, R.; Marcu, D.: Abstractive headline generation using WIDL-expressions (2007)
    Abstract
    We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows similar performance in quality with a state-of-the-art, extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation.
    Source
    Information processing and management. 43(2007) no.6, S.1536-1548
  15. Dunlavy, D.M.; O'Leary, D.P.; Conroy, J.M.; Schlesinger, J.D.: QCS: A system for querying, clustering and summarizing documents (2007)
    Abstract
    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel integrated information retrieval system-the Query, Cluster, Summarize (QCS) system-which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of methods in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) as measured by the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence "trimming" and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
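    The query-cluster-summarize flow can be approximated with off-the-shelf components. A compressed sketch, assuming scikit-learn and a corpus with more documents than top_k: LSI via truncated SVD, ordinary k-means on unit-length vectors standing in for spherical k-means, and "best document per cluster" standing in for QCS's trimming/HMM/pivoted-QR summary step:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.preprocessing import normalize
    from sklearn.cluster import KMeans

    def qcs_like(query, docs, n_clusters=3, n_topics=50, top_k=20):
        """Retrieve docs for a query in LSI space, cluster the hits, and
        return one representative per cluster (a crude stand-in for
        QCS's per-cluster extract summaries)."""
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(docs)
        lsi = TruncatedSVD(n_components=min(n_topics, X.shape[1] - 1)).fit(X)
        D = normalize(lsi.transform(X))                   # unit-length doc vectors
        q = normalize(lsi.transform(tfidf.transform([query])))
        hits = (D @ q.T).ravel().argsort()[::-1][:top_k]  # rank by cosine
        # k-means on unit vectors approximates spherical k-means.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(D[hits])
        best = {}
        for doc_idx, c in zip(hits, labels):   # hits are rank-ordered,
            best.setdefault(c, docs[doc_idx])  # so the first per cluster wins
        return list(best.values())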
    Source
    Information processing and management. 43(2007) no.6, S.1588-1605
  16. Ou, S.; Khoo, C.S.G.; Goh, D.H.: Multi-document summarization of news articles using an event-based framework (2006)
    Abstract
    Purpose - The purpose of this research is to develop a method for automatic construction of multi-document summaries of sets of news articles that might be retrieved by a web search engine in response to a user query.
    Design/methodology/approach - Based on cross-document discourse analysis, an event-based framework is proposed for integrating and organizing information extracted from different news articles. It has a hierarchical structure in which the summarized information is presented at the top level and more detailed information is given at the lower levels. A tree-view interface was implemented for displaying a multi-document summary based on the framework. A preliminary user evaluation compared the framework-based summaries against sentence-based summaries.
    Findings - In a small evaluation, all the human subjects preferred the framework-based summaries to the sentence-based summaries, indicating that the event-based framework is an effective way to summarize a set of news articles reporting an event or a series of relevant events.
    Research limitations/implications - The framework is limited to event-based news articles and is not applicable to news critiques and other kinds of news articles. A summarization system based on the event-based framework is being implemented.
    Practical implications - Multi-document summarization of news articles can adopt the proposed event-based framework.
    Originality/value - An event-based framework for summarizing sets of news articles was developed and evaluated using a tree-view interface for displaying such summaries.
  17. Craven, T.C.: Abstracts produced using computer assistance (2000)
    Abstract
    Experimental subjects wrote abstracts using a simplified version of the TEXNET abstracting assistance software. In addition to the full text, subjects were presented with either keywords or phrases extracted automatically. The resulting abstracts, and the times taken, were recorded automatically; some additional information was gathered by oral questionnaire. Selected abstracts were evaluated on various criteria by independent raters. Results showed considerable variation among subjects, but 37% found the keywords or phrases 'quite' or 'very' useful in writing their abstracts. Statistical analysis failed to support several hypothesized relations: phrases were not viewed as significantly more helpful than keywords, and abstracting experience did not correlate with originality of wording, approximation of the author abstract, or greater conciseness. Some unanticipated strong correlations require further study, including: Windows experience and writing an abstract like the author's; experience reading abstracts and thinking one had written a good abstract; gender and abstract length; and gender and use of words and phrases from the original text. The results also suggest possible modifications to the TEXNET software.
    Source
    Journal of the American Society for Information Science. 51(2000) no.8, S.745-756
  18. Chen, H.-H.; Kuo, J.-J.; Huang, S.-J.; Lin, C.-J.; Wung, H.-C.: A summarization system for Chinese news from multiple sources (2003)
    Abstract
    This article proposes a summarization system for multiple documents. It employs named entities and other signatures to cluster news from different sources, and punctuation marks, linking elements, and topic chains to identify meaningful units (MUs). Nouns and verbs are used to identify similar MUs, and focusing and browsing models are applied to present the summarization results. To reduce information loss during summarization, informative words in a document are introduced. For the evaluation, a question answering (QA) system is proposed as a substitute for human assessors. In large-scale experiments posing 140 questions against 17,877 documents, the results show that the models using informative words outperform the purely heuristic voting-only strategy by news reporters. The model can easily be further applied to summarize multilingual news from multiple sources.
    Source
Journal of the American Society for Information Science and Technology. 54(2003) no.13, S.1224-1236
  19. Shen, D.; Yang, Q.; Chen, Z.: Noise reduction through summarization for Web-page classification (2007)
    Abstract
    Due to the large variety of noisy information embedded in Web pages, Web-page classification is much more difficult than pure-text classification. In this paper, we propose to improve Web-page classification performance by removing the noise through summarization techniques. We first give empirical evidence that ideal Web-page summaries generated by human editors can indeed improve the performance of Web-page classification algorithms. We then put forward a new Web-page summarization algorithm based on Web-page layout and evaluate it along with several other state-of-the-art text summarization algorithms on the LookSmart Web directory. Experimental results show that the classification algorithms (NB or SVM) augmented by any summarization approach achieve an improvement of more than 5.0% compared with pure-text-based classification algorithms. We further introduce an ensemble method to combine the different summarization algorithms, which achieves an improvement of more than 12.0% over pure-text-based methods.
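    The summarize-then-classify recipe is easy to reproduce in outline. A minimal sketch, assuming scikit-learn, with a crude lead-sentence summarizer standing in for the paper's layout-aware algorithm; everything except the NB classifier is an illustrative assumption:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def lead_summary(text, n_sentences=5):
        """Crude stand-in summarizer: keep the first n sentences. The
        paper's layout-aware Web-page summarizer would slot in here."""
        sentences = text.replace("\n", " ").split(". ")
        return ". ".join(sentences[:n_sentences])

    def train_on_summaries(pages, labels):
        """Train a Naive Bayes classifier on summaries instead of full
        pages - the noise-reduction setup the paper evaluates."""
        clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                            MultinomialNB())
        clf.fit([lead_summary(p) for p in pages], labels)
        return clf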
    Source
    Information processing and management. 43(2007) no.6, S.1735-1747
  20. Over, P.; Dang, H.; Harman, D.: DUC in context (2007)
    Source
    Information processing and management. 43(2007) no.6, S.1506-1520