Search (98 results, page 1 of 5)

  • Filter: theme_ss:"Automatisches Abstracting"
  1. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.03
    0.0304892 = product of:
      0.07114147 = sum of:
        0.014497695 = weight(_text_:information in 6974) [ClassicSimilarity], result of:
          0.014497695 = score(doc=6974,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.21943474 = fieldWeight in 6974, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.043046176 = weight(_text_:retrieval in 6974) [ClassicSimilarity], result of:
          0.043046176 = score(doc=6974,freq=4.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.37811437 = fieldWeight in 6974, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.013597593 = product of:
          0.040792778 = sum of:
            0.040792778 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.040792778 = score(doc=6974,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
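  The nested explanation above is Lucene's ClassicSimilarity (TF-IDF) scoring: for each matching term, score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the per-term scores are summed and scaled by the coordination factor coord = matching clauses / total clauses (here 3/7). The short Python check below re-derives result 1's score of 0.0304892 from the numbers printed in the explanation; queryNorm and the fieldNorms are taken as given from the index, and this is a worked arithmetic check, not the search engine's code.

    import math

    QUERY_NORM = 0.037635546   # printed queryNorm, taken as given

    def idf(doc_freq, max_docs):
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, field_norm):
        tf = math.sqrt(freq)                  # tf(freq)
        i = idf(doc_freq, max_docs)           # idf(docFreq, maxDocs)
        query_weight = i * QUERY_NORM         # queryWeight = idf * queryNorm
        field_weight = tf * i * field_norm    # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    # the three matching clauses of doc 6974 (Jones & Bradbeer 1996)
    information = term_score(4.0, 20772, 44218, 0.0625)            # ~0.014497695
    retrieval   = term_score(4.0,  5836, 44218, 0.0625)            # ~0.043046176
    # "22" sits in a nested boolean clause where 1 of 3 subclauses matches -> coord(1/3)
    year_22     = term_score(2.0,  3622, 44218, 0.0625) * (1 / 3)  # ~0.013597593

    total = (information + retrieval + year_22) * (3 / 7)          # coord(3/7)
    print(f"{total:.7f}")   # ~0.0304892, matching the first result up to rounding

  The coord(2/7) factors further down the list (e.g. results 5-20) simply mean that only two of the seven query clauses matched those documents.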
  2. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.03
    0.02914858 = product of:
      0.068013355 = sum of:
        0.012814272 = weight(_text_:information in 1949) [ClassicSimilarity], result of:
          0.012814272 = score(doc=1949,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.19395474 = fieldWeight in 1949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
        0.038047805 = weight(_text_:retrieval in 1949) [ClassicSimilarity], result of:
          0.038047805 = score(doc=1949,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.33420905 = fieldWeight in 1949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
        0.017151278 = product of:
          0.051453833 = sum of:
            0.051453833 = weight(_text_:29 in 1949) [ClassicSimilarity], result of:
              0.051453833 = score(doc=1949,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.38865322 = fieldWeight in 1949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1949)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Date
    16. 8.1998 12:30:29
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997, pp. 478-483.
  3. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.03
    0.02553399 = product of:
      0.059579305 = sum of:
        0.006407136 = weight(_text_:information in 2640) [ClassicSimilarity], result of:
          0.006407136 = score(doc=2640,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.09697737 = fieldWeight in 2640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2640)
        0.019023903 = weight(_text_:retrieval in 2640) [ClassicSimilarity], result of:
          0.019023903 = score(doc=2640,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.16710453 = fieldWeight in 2640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2640)
        0.03414827 = product of:
          0.051222403 = sum of:
            0.025726916 = weight(_text_:29 in 2640) [ClassicSimilarity], result of:
              0.025726916 = score(doc=2640,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19432661 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
            0.025495486 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.025495486 = score(doc=2640,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.6666667 = coord(2/3)
      0.42857143 = coord(3/7)
    
    Abstract
    We propose a tag-based framework that simulates human abstractors' ability to select significant sentences based on key concepts in a sentence as well as the semantic relations between key concepts to create generic summaries of transcribed lecture videos. The proposed extractive summarization method uses tags (viewer- and author-assigned terms) as key concepts. Our method employs Flickr tag clusters and WordNet synonyms to expand tags and detect the semantic relations between tags. This method helps select sentences that have a greater number of semantically related key concepts. To investigate the effectiveness and uniqueness of the proposed method, we compare it with an existing technique, latent semantic analysis (LSA), using intrinsic and extrinsic evaluations. The results of intrinsic evaluation show that the tag-based method is as or more effective than the LSA method. We also observe that in the extrinsic evaluation, the grand mean accuracy score of the tag-based method is higher than that of the LSA method, with a statistically significant difference. Elaborating on our results, we discuss the theoretical and practical implications of our findings for speech video summarization and retrieval.
    Date
    22. 1.2016 12:29:41
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.366-379
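  The sentence-selection idea in the abstract of result 3 can be illustrated with a small sketch: expand each key-concept tag with WordNet synonyms, then rank sentences by how many distinct (expanded) concepts they mention. This is only an assumption-laden illustration of the extraction step, not the authors' implementation; the Flickr tag clusters, the semantic-relation detection between tags, and the comparison against LSA are all omitted, and a plain token match stands in for their matching procedure.

    # requires: pip install nltk; python -c "import nltk; nltk.download('wordnet')"
    from nltk.corpus import wordnet as wn

    def expand_tags(tags):
        """Map each tag to a set containing the tag and its WordNet synonym lemmas."""
        expanded = {}
        for tag in tags:
            synonyms = {tag.lower()}
            for synset in wn.synsets(tag):
                synonyms.update(l.replace("_", " ").lower() for l in synset.lemma_names())
            expanded[tag] = synonyms
        return expanded

    def concept_count(sentence, expanded):
        # naive single-token match; multi-word synonyms are effectively ignored
        words = set(sentence.lower().split())
        return sum(1 for synonyms in expanded.values() if words & synonyms)

    def summarize(sentences, tags, k=3):
        expanded = expand_tags(tags)
        return sorted(sentences, key=lambda s: concept_count(s, expanded), reverse=True)[:k]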
  4. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.02
    0.017287144 = product of:
      0.04033667 = sum of:
        0.012814272 = weight(_text_:information in 1012) [ClassicSimilarity], result of:
          0.012814272 = score(doc=1012,freq=8.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.19395474 = fieldWeight in 1012, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.019023903 = weight(_text_:retrieval in 1012) [ClassicSimilarity], result of:
          0.019023903 = score(doc=1012,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.16710453 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.008498495 = product of:
          0.025495486 = sum of:
            0.025495486 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.025495486 = score(doc=1012,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.33333334 = coord(1/3)
      0.42857143 = coord(3/7)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has attracted growing attention. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the macro-averages of the three reported metrics on the Paper with Code dataset reach up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.759-774
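  Result 4 describes conditioning a sequence-to-sequence model on a "control code" that names the desired keyphrase function. A minimal sketch of that conditioning pattern with an off-the-shelf T5 checkpoint is given below; the prompt format, the function labels, and the t5-small checkpoint are illustrative assumptions rather than the CKPG models from the paper, and the model would need fine-tuning on (text, control code, keyphrases) examples before the outputs are meaningful.

    from transformers import T5TokenizerFast, T5ForConditionalGeneration

    tokenizer = T5TokenizerFast.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    def generate_keyphrases(text, function_code):
        # the control code is prepended to the input so generation can condition on it
        prompt = f"keyphrase function: {function_code} | text: {text}"
        input_ids = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512).input_ids
        output_ids = model.generate(input_ids, max_new_tokens=32, num_beams=4)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    # an untuned t5-small will not produce useful keyphrases; fine-tune on labelled data first
    print(generate_keyphrases("We propose a controllable keyphrase generation framework ...", "method"))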
  5. Pinto, M.: Engineering the production of meta-information : the abstracting concern (2003) 0.02
    0.016951077 = product of:
      0.059328765 = sum of:
        0.025370965 = weight(_text_:information in 4667) [ClassicSimilarity], result of:
          0.025370965 = score(doc=4667,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.3840108 = fieldWeight in 4667, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4667)
        0.033957798 = product of:
          0.10187339 = sum of:
            0.10187339 = weight(_text_:29 in 4667) [ClassicSimilarity], result of:
              0.10187339 = score(doc=4667,freq=4.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.7694941 = fieldWeight in 4667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4667)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    27.11.2005 18:29:55
    Source
    Journal of information science. 29(2003) no.5, S.405-418
  6. Lam, W.; Chan, K.; Radev, D.; Saggion, H.; Teufel, S.: Context-based generic cross-lingual retrieval of documents and automated summaries (2005) 0.02
    0.015241695 = product of:
      0.05334593 = sum of:
        0.007688564 = weight(_text_:information in 1965) [ClassicSimilarity], result of:
          0.007688564 = score(doc=1965,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.116372846 = fieldWeight in 1965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1965)
        0.045657367 = weight(_text_:retrieval in 1965) [ClassicSimilarity], result of:
          0.045657367 = score(doc=1965,freq=8.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.40105087 = fieldWeight in 1965, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1965)
      0.2857143 = coord(2/7)
    
    Abstract
    We develop a context-based generic cross-lingual retrieval model that can deal with different language pairs. Our model considers contexts in the query translation process. Contexts in the query as well as in the documents, based on co-occurrence statistics from different granularities of passages, are exploited. We also investigate cross-lingual retrieval of automatic generic summaries. We have implemented our model for two different cross-lingual settings, namely, retrieving Chinese documents from English queries as well as retrieving English documents from Chinese queries. Extensive experiments have been conducted on a large-scale parallel corpus, enabling studies of retrieval performance for two different cross-lingual settings of full-length documents as well as automated summaries.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.2, S.129-139
  7. Johnson, F.C.; Paice, C.D.; Black, W.J.; Neal, A.P.: The application of linguistic processing to automatic abstract generation (1993) 0.01
    0.014532023 = product of:
      0.050862078 = sum of:
        0.012814272 = weight(_text_:information in 2290) [ClassicSimilarity], result of:
          0.012814272 = score(doc=2290,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.19395474 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2290)
        0.038047805 = weight(_text_:retrieval in 2290) [ClassicSimilarity], result of:
          0.038047805 = score(doc=2290,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.33420905 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=2290)
      0.2857143 = coord(2/7)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997, pp. 538-552.
  8. Marsh, E.: A production rule system for message summarisation (1984) 0.01
    0.014532023 = product of:
      0.050862078 = sum of:
        0.012814272 = weight(_text_:information in 1956) [ClassicSimilarity], result of:
          0.012814272 = score(doc=1956,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.19395474 = fieldWeight in 1956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1956)
        0.038047805 = weight(_text_:retrieval in 1956) [ClassicSimilarity], result of:
          0.038047805 = score(doc=1956,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.33420905 = fieldWeight in 1956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1956)
      0.2857143 = coord(2/7)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997, pp. 534-537.
  9. Dunlavy, D.M.; O'Leary, D.P.; Conroy, J.M.; Schlesinger, J.D.: QCS: A system for querying, clustering and summarizing documents (2007) 0.01
    0.0108062085 = product of:
      0.03782173 = sum of:
        0.011461434 = weight(_text_:information in 947) [ClassicSimilarity], result of:
          0.011461434 = score(doc=947,freq=10.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.1734784 = fieldWeight in 947, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=947)
        0.026360294 = weight(_text_:retrieval in 947) [ClassicSimilarity], result of:
          0.026360294 = score(doc=947,freq=6.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.23154683 = fieldWeight in 947, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=947)
      0.2857143 = coord(2/7)
    
    Abstract
    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel integrated information retrieval system-the Query, Cluster, Summarize (QCS) system-which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of methods in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) as measured by the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence "trimming" and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
    Source
    Information processing and management. 43(2007) no.6, S.1588-1605
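  The QCS pipeline of result 9 (LSI retrieval, generalized spherical k-means clustering, HMM/pivoted-QR extraction) can be caricatured with scikit-learn stand-ins, as in the sketch below. Every component here is an acknowledged substitution: TruncatedSVD over TF-IDF plays the role of Latent Semantic Indexing, ordinary k-means on length-normalised vectors replaces generalized spherical k-means, and the per-cluster "summary" is just the lead sentence of the cluster's best-ranked document instead of the trimming/HMM/pivoted-QR extractor used by QCS.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import normalize

    def qcs_sketch(query, docs, n_clusters=2, top_k=6, n_dims=50):
        # Q: rank documents against the query in a low-rank ("LSI-like") space
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(docs + [query])
        n_components = max(1, min(n_dims, X.shape[0] - 1, X.shape[1] - 1))
        Z = normalize(TruncatedSVD(n_components=n_components).fit_transform(X))
        doc_vecs, query_vec = Z[:-1], Z[-1]
        ranked = np.argsort(doc_vecs @ query_vec)[::-1][:top_k]   # cosine similarity on unit vectors

        # C: cluster the retrieved documents (plain k-means standing in for spherical k-means)
        k = min(n_clusters, len(ranked))
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(doc_vecs[ranked])

        # S: naive one-sentence "summary" per cluster: lead sentence of its best-ranked document
        summaries = {}
        for label, doc_idx in zip(labels, ranked):
            summaries.setdefault(label, docs[doc_idx].split(".")[0].strip() + ".")
        return summaries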
  10. Nomoto, T.: Discriminative sentence compression with conditional random fields (2007) 0.01
    0.010327334 = product of:
      0.036145665 = sum of:
        0.013316983 = weight(_text_:information in 945) [ClassicSimilarity], result of:
          0.013316983 = score(doc=945,freq=6.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.20156369 = fieldWeight in 945, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=945)
        0.022828683 = weight(_text_:retrieval in 945) [ClassicSimilarity], result of:
          0.022828683 = score(doc=945,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.20052543 = fieldWeight in 945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=945)
      0.2857143 = coord(2/7)
    
    Abstract
    The paper focuses on a particular approach to automatic sentence compression which makes use of a discriminative sequence classifier known as Conditional Random Fields (CRF). We devise several features for CRF that allow it to incorporate information on nonlinear relations among words. Along with that, we address the issue of data paucity by collecting data from RSS feeds available on the Internet, and turning them into training data for use with CRF, drawing on techniques from biology and information retrieval. We also discuss a recursive application of CRF on the syntactic structure of a sentence as a way of improving the readability of the compression it generates. Experiments found that our approach works reasonably well compared to the state-of-the-art system [Knight, K., & Marcu, D. (2002). Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence 139, 91-107.].
    Source
    Information processing and management. 43(2007) no.6, S.1571-1587
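  Sentence compression with a linear-chain CRF, as in result 10, amounts to labelling every token of a sentence keep/drop given local features. The sketch below uses the sklearn-crfsuite package and a handful of toy surface features; the paper's nonlinear word-relation features, the recursive application over the syntactic structure, and the RSS-derived training data are all omitted, and an annotated training set (token sequences with aligned KEEP/DROP labels) is assumed to exist.

    # requires: pip install sklearn-crfsuite
    import sklearn_crfsuite

    def token_features(tokens, i):
        w = tokens[i]
        return {
            "word.lower": w.lower(),
            "word.isupper": w.isupper(),
            "word.isdigit": w.isdigit(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        }

    def featurize(sentences):
        return [[token_features(toks, i) for i in range(len(toks))] for toks in sentences]

    def train_compressor(sentences, labels):
        # labels: per-token "KEEP"/"DROP" sequences aligned with the token lists in `sentences`
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
        crf.fit(featurize(sentences), labels)
        return crf

    def compress(crf, tokens):
        tags = crf.predict(featurize([tokens]))[0]
        return " ".join(t for t, tag in zip(tokens, tags) if tag == "KEEP")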
  11. Moens, M.-F.: Summarizing court decisions (2007) 0.01
    0.010172416 = product of:
      0.035603456 = sum of:
        0.0089699915 = weight(_text_:information in 954) [ClassicSimilarity], result of:
          0.0089699915 = score(doc=954,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.13576832 = fieldWeight in 954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=954)
        0.026633464 = weight(_text_:retrieval in 954) [ClassicSimilarity], result of:
          0.026633464 = score(doc=954,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.23394634 = fieldWeight in 954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=954)
      0.2857143 = coord(2/7)
    
    Abstract
    In the field of law there is an absolute need for summarizing the texts of court decisions in order to make the content of the cases easily accessible for legal professionals. During the SALOMON and MOSAIC projects we investigated the summarization and retrieval of legal cases. This article presents some of the main findings while integrating the research results of experiments on legal document summarization by other research groups. In addition, we propose novel avenues of research for automatic text summarization, which we currently exploit when summarizing court decisions in the ACILA project. Techniques for automated concept learning and argument recognition are here the most challenging.
    Source
    Information processing and management. 43(2007) no.6, S.1748-1764
  12. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.01
    0.00974298 = product of:
      0.03410043 = sum of:
        0.020502837 = weight(_text_:information in 6599) [ClassicSimilarity], result of:
          0.020502837 = score(doc=6599,freq=8.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.3103276 = fieldWeight in 6599, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6599)
        0.013597593 = product of:
          0.040792778 = sum of:
            0.040792778 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.040792778 = score(doc=6599,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    With the onset of the information explosion arising from digital libraries and access to a wealth of information through the Internet, the need to efficiently determine the relevance of a document becomes even more urgent. Describes a text extraction system (TES), which retrieves a set of sentences from a document to form an indicative abstract. Such an automated process enables information to be filtered more quickly. Discusses the combination of various text extraction techniques. Compares results with manually produced abstracts.
    Date
    26. 2.1997 10:22:43
    Source
    Microcomputers for information management. 13(1996) no.1, S.41-55
  13. Atanassova, I.; Bertin, M.; Larivière, V.: On the composition of scientific abstracts (2016) 0.01
    0.008606112 = product of:
      0.03012139 = sum of:
        0.011097487 = weight(_text_:information in 3028) [ClassicSimilarity], result of:
          0.011097487 = score(doc=3028,freq=6.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.16796975 = fieldWeight in 3028, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3028)
        0.019023903 = weight(_text_:retrieval in 3028) [ClassicSimilarity], result of:
          0.019023903 = score(doc=3028,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.16710453 = fieldWeight in 3028, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3028)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - Scientific abstracts reproduce only part of the information and the complexity of argumentation in a scientific article. The purpose of this paper is to provide a first analysis of the similarity between the text of scientific abstracts and the body of articles, using sentences as the basic textual unit. It contributes to the understanding of the structure of abstracts. Design/methodology/approach - Using sentence-based similarity metrics, the authors quantify the phenomenon of text re-use in abstracts and examine the positions of the sentences that are similar to sentences in abstracts in the introduction, methods, results and discussion structure, using a corpus of over 85,000 research articles published in the seven Public Library of Science journals. Findings - The authors provide evidence that 84 percent of abstracts have at least one sentence in common with the body of the paper. Studying the distributions of sentences in the body of the articles that are re-used in abstracts, the authors show that there exists a strong relation between the rhetorical structure of articles and the zones that authors re-use when writing abstracts, with sentences mainly coming from the beginning of the introduction and the end of the conclusion. Originality/value - Scientific abstracts contain what is considered by the author(s) as the information that best describes a document's content. This is a first study that examines the relation between the contents of abstracts and the rhetorical structure of scientific articles. The work might provide new insight for improving automatic abstracting tools as well as information retrieval approaches, in which text organization and structure are important features.
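  The abstract/body comparison described in result 13 can be sketched directly: for each abstract sentence, find its most similar sentence in the article body and record where in the body it occurs. In the sketch below, TF-IDF cosine similarity stands in for the paper's sentence-based similarity metrics, sentences are split naively on periods, and the 0.5 re-use threshold is an arbitrary assumption.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def abstract_reuse_positions(abstract, body, threshold=0.5):
        abs_sents = [s.strip() for s in abstract.split(".") if s.strip()]
        body_sents = [s.strip() for s in body.split(".") if s.strip()]
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(abs_sents + body_sents)
        sims = cosine_similarity(X[: len(abs_sents)], X[len(abs_sents):])
        positions = []
        for row in sims:
            j = row.argmax()
            if row[j] >= threshold:   # count the abstract sentence as re-used from the body
                # 0.0 = start of the body, 1.0 = end of the body
                positions.append(j / max(1, len(body_sents) - 1))
        return positions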
  14. Jones, S.; Paynter, G.W.: Automatic extraction of document keyphrases for use in digital libraries : evaluations and applications (2002) 0.01
    0.0072660116 = product of:
      0.025431039 = sum of:
        0.006407136 = weight(_text_:information in 601) [ClassicSimilarity], result of:
          0.006407136 = score(doc=601,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.09697737 = fieldWeight in 601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=601)
        0.019023903 = weight(_text_:retrieval in 601) [ClassicSimilarity], result of:
          0.019023903 = score(doc=601,freq=2.0), product of:
            0.11384433 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.037635546 = queryNorm
            0.16710453 = fieldWeight in 601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=601)
      0.2857143 = coord(2/7)
    
    Abstract
    This article describes an evaluation of the Kea automatic keyphrase extraction algorithm. Document keyphrases are conventionally used as concise descriptors of document content, and are increasingly used in novel ways, including document clustering, searching and browsing interfaces, and retrieval engines. However, it is costly and time consuming to manually assign keyphrases to documents, motivating the development of tools that automatically perform this function. Previous studies have evaluated Kea's performance by measuring its ability to identify author keywords and keyphrases, but this methodology has a number of well-known limitations. The results presented in this article are based on evaluations by human assessors of the quality and appropriateness of Kea keyphrases. The results indicate that, in general, Kea produces keyphrases that are rated positively by human assessors. However, typical Kea settings can degrade performance, particularly those relating to keyphrase length and domain specificity. We found that for some settings, Kea's performance is better than that of similar systems, and that Kea's ranking of extracted keyphrases is effective. We also determined that author-specified keyphrases appear to exhibit an inherent ranking, and that they are rated highly and therefore suitable for use in training and evaluation of automatic keyphrasing systems.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.8, S.653-677
  15. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.01
    0.006934244 = product of:
      0.024269853 = sum of:
        0.015694214 = weight(_text_:information in 2054) [ClassicSimilarity], result of:
          0.015694214 = score(doc=2054,freq=12.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.23754507 = fieldWeight in 2054, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2054)
        0.008575639 = product of:
          0.025726916 = sum of:
            0.025726916 = weight(_text_:29 in 2054) [ClassicSimilarity], result of:
              0.025726916 = score(doc=2054,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19432661 = fieldWeight in 2054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2054)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. By condensing a textual document, automatic text summarisation can reduce the need to refer to the source document. It also offers a means to deliver device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, focusing exclusively on novel parts may result in a loss of context, which may have an impact on the correct interpretation of the summary with respect to the source document. In this study we compare two strategies to produce summaries that incorporate novelty in different ways: a constant length summary, which contains only novel sentences, and an incremental summary, containing additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides a sufficient basis to determine the relevance of a document, or if indeed we need to include additional sentences to provide context. Findings from the study seem to suggest that there is only a minimal difference in performance for the tasks we set our users and that the presence of contextual information is not so important. However, for the case of mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
    Date
    29. 7.2008 19:35:12
    Source
    Information processing and management. 44(2008) no.2, S.663-686
  16. Craven, T.C.: ¬A phrase flipper for the assistance of writers of abstracts and other text (1995) 0.01
    0.006849269 = product of:
      0.02397244 = sum of:
        0.010251419 = weight(_text_:information in 4897) [ClassicSimilarity], result of:
          0.010251419 = score(doc=4897,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.1551638 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4897)
        0.013721022 = product of:
          0.041163065 = sum of:
            0.041163065 = weight(_text_:29 in 4897) [ClassicSimilarity], result of:
              0.041163065 = score(doc=4897,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.31092256 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4897)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    17. 8.1996 10:29:59
    Source
    Canadian journal of information and library science. 20(1995) nos.3/4, S.41-49
  17. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    0.006020419 = product of:
      0.021071466 = sum of:
        0.010873271 = weight(_text_:information in 948) [ClassicSimilarity], result of:
          0.010873271 = score(doc=948,freq=4.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.16457605 = fieldWeight in 948, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=948)
        0.0101981945 = product of:
          0.030594582 = sum of:
            0.030594582 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.030594582 = score(doc=948,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
    Source
    Information processing and management. 43(2007) no.6, S.1606-1618
  18. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.01
    0.0055988524 = product of:
      0.019595983 = sum of:
        0.011097487 = weight(_text_:information in 889) [ClassicSimilarity], result of:
          0.011097487 = score(doc=889,freq=6.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.16796975 = fieldWeight in 889, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=889)
        0.008498495 = product of:
          0.025495486 = sum of:
            0.025495486 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.025495486 = score(doc=889,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The automatic summarization of scientific articles differs from other text genres because of the structured format and longer text length. Previous approaches have focused on tackling the lengthy nature of scientific articles, aiming to improve the computational efficiency of summarizing long text using a flat, unstructured abstract. However, the structured format of scientific articles and characteristics of each section have not been fully explored, despite their importance. The lack of a sufficient investigation and discussion of various characteristics for each section and their influence on summarization results has hindered the practical use of automatic summarization for scientific articles. To provide a balanced abstract proportionally emphasizing each section of a scientific article, the community introduced the structured abstract, an abstract with distinct, labeled sections. Using this information, in this study, we aim to understand tasks ranging from data preparation to model evaluation from diverse viewpoints. Specifically, we provide a preprocessed large-scale dataset and propose a summarization method applying the introduction, methods, results, and discussion (IMRaD) format reflecting the characteristics of each section. We also discuss the objective benchmarks and perspectives of state-of-the-art algorithms and present the challenges and research directions in this area.
    Date
    22. 1.2023 18:57:12
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.234-248
  19. Wang, S.; Koopman, R.: Embed first, then predict (2019) 0.00
    0.004280793 = product of:
      0.014982775 = sum of:
        0.006407136 = weight(_text_:information in 5400) [ClassicSimilarity], result of:
          0.006407136 = score(doc=5400,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.09697737 = fieldWeight in 5400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5400)
        0.008575639 = product of:
          0.025726916 = sum of:
            0.025726916 = weight(_text_:29 in 5400) [ClassicSimilarity], result of:
              0.025726916 = score(doc=5400,freq=2.0), product of:
                0.13239008 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19432661 = fieldWeight in 5400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5400)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    29. 9.2019 12:18:42
    Footnote
    Contribution to a special issue: Research Information Systems and Science Classifications; including papers from "Trajectories for Research: Fathoming the Promise of the NARCIS Classification," 27-28 September 2018, The Hague, The Netherlands.
  20. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.00
    0.004258752 = product of:
      0.014905632 = sum of:
        0.006407136 = weight(_text_:information in 5290) [ClassicSimilarity], result of:
          0.006407136 = score(doc=5290,freq=2.0), product of:
            0.066068366 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037635546 = queryNorm
            0.09697737 = fieldWeight in 5290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5290)
        0.008498495 = product of:
          0.025495486 = sum of:
            0.025495486 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.025495486 = score(doc=5290,freq=2.0), product of:
                0.13179328 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037635546 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    22. 7.2006 17:25:48
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.740-752

Languages

  • e 86
  • d 11
  • chi 1

Types

  • a 94
  • m 2
  • el 1
  • r 1
  • s 1