Search (85 results, page 1 of 5)

  • language_ss:"e"
  • theme_ss:"Automatisches Abstracting"
  • type_ss:"a"
  1. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.00
    0.0025604097 = product of:
      0.023043687 = sum of:
        0.0031667221 = weight(_text_:in in 2640) [ClassicSimilarity], result of:
          0.0031667221 = score(doc=2640,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.10626988 = fieldWeight in 2640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2640)
        0.019876964 = product of:
          0.029815445 = sum of:
            0.0149750775 = weight(_text_:29 in 2640) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=2640,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
            0.014840367 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.014840367 = score(doc=2640,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.6666667 = coord(2/3)
      0.11111111 = coord(2/18)
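    Read against Lucene's ClassicSimilarity, which produced this explain tree, each leaf weight above follows the standard TF-IDF form
      score(q,d) = coord(q,d) \cdot \sum_{t \in q} \underbrace{idf(t)\cdot queryNorm}_{queryWeight} \cdot \underbrace{\sqrt{tf(t,d)}\cdot idf(t)\cdot fieldNorm(d)}_{fieldWeight},
      \quad idf(t) = 1 + \ln\bigl(maxDocs / (docFreq(t)+1)\bigr)
    For _text_:in in this entry: idf = 1 + ln(44218/30842) ≈ 1.3603, queryWeight = 1.3603 · 0.0219068 ≈ 0.0298, fieldWeight = sqrt(4) · 1.3603 · 0.0390625 ≈ 0.1063, and 0.0298 · 0.1063 ≈ 0.0031667, matching the printed value; the coord factors give the fraction of query clauses matched. The same reading applies to the score breakdowns of the entries below.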
    
    Abstract
    We propose a tag-based framework that simulates human abstractors' ability to select significant sentences based on key concepts in a sentence as well as the semantic relations between key concepts to create generic summaries of transcribed lecture videos. The proposed extractive summarization method uses tags (viewer- and author-assigned terms) as key concepts. Our method employs Flickr tag clusters and WordNet synonyms to expand tags and detect the semantic relations between tags. This method helps select sentences that have a greater number of semantically related key concepts. To investigate the effectiveness and uniqueness of the proposed method, we compare it with an existing technique, latent semantic analysis (LSA), using intrinsic and extrinsic evaluations. The results of intrinsic evaluation show that the tag-based method is at least as effective as the LSA method. We also observe that in the extrinsic evaluation, the grand mean accuracy score of the tag-based method is higher than that of the LSA method, with a statistically significant difference. Elaborating on our results, we discuss the theoretical and practical implications of our findings for speech video summarization and retrieval.
    Date
    22. 1.2016 12:29:41
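    Sketch
    A minimal illustration (not the authors' implementation) of the selection idea described in the abstract: each sentence is scored by how many key concepts it touches, where each tag comes with an assumed, precomputed set of related terms standing in for the Flickr tag clusters and WordNet synonyms.
      def rank_by_key_concepts(sentences, tag_expansions, top_n=3):
          """Rank transcript sentences by the number of (expanded) key concepts
          they mention.  tag_expansions maps each tag to a set of related terms
          (hypothetical input in place of Flickr clusters / WordNet synonyms)."""
          scored = []
          for sent in sentences:
              words = set(sent.lower().split())
              hits = sum(
                  1 for tag, related in tag_expansions.items()
                  if words & ({tag.lower()} | {t.lower() for t in related})
              )
              scored.append((hits, sent))
          best = {s for _, s in sorted(scored, key=lambda x: -x[0])[:top_n]}
          return [s for s in sentences if s in best]   # keep original order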
  2. Endres-Niggemeyer, B.: An empirical process model of abstracting (1992) 0.00
    0.002207394 = product of:
      0.019866545 = sum of:
        0.0053741056 = weight(_text_:in in 8834) [ClassicSimilarity], result of:
          0.0053741056 = score(doc=8834,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.18034597 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
        0.01449244 = weight(_text_:der in 8834) [ClassicSimilarity], result of:
          0.01449244 = score(doc=8834,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.29615843 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
      0.11111111 = coord(2/18)
    
    Source
    Mensch und Maschine: Informationelle Schnittstellen der Kommunikation. Proceedings of the 3rd International Symposium on Information Science (ISI'92), 5-7 Nov 1992, Saarbrücken. Ed.: H.H. Zimmermann, H.-D. Luckhardt and A. Schulz
  3. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.00
    0.0018129811 = product of:
      0.01631683 = sum of:
        0.0063334443 = weight(_text_:in in 1949) [ClassicSimilarity], result of:
          0.0063334443 = score(doc=1949,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.21253976 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
        0.009983385 = product of:
          0.029950155 = sum of:
            0.029950155 = weight(_text_:29 in 1949) [ClassicSimilarity], result of:
              0.029950155 = score(doc=1949,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.38865322 = fieldWeight in 1949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1949)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Date
    16. 8.1998 12:30:29
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp.478-483.
  4. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.00
    0.001675593 = product of:
      0.0150803365 = sum of:
        0.007165474 = weight(_text_:in in 6974) [ClassicSimilarity], result of:
          0.007165474 = score(doc=6974,freq=8.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.24046129 = fieldWeight in 6974, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.007914863 = product of:
          0.023744587 = sum of:
            0.023744587 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.023744587 = score(doc=6974,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Describes the application of weighting strategies to model uncertainties and probabilities in automatic abstracting systems, particularly in the concept selection phase. The weights were originally assigned in an ad hoc manner and were then refined by manual analysis of the results. The new method attempts to derive the weights more systematically, using a genetic algorithm.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
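    Sketch
    A compact genetic-algorithm sketch of the weight-refinement step described in the abstract; the fitness function that scores a weight vector (e.g. by comparing the resulting abstract against a reference) is assumed to be supplied, and the operator settings are illustrative.
      import random

      def evolve_weights(fitness, n_weights, pop_size=20, generations=50,
                         mutation_rate=0.1):
          """Refine concept-selection weights with a simple genetic algorithm.
          fitness: assumed callable mapping a weight vector to a quality score."""
          pop = [[random.random() for _ in range(n_weights)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              survivors = pop[:pop_size // 2]               # selection
              children = []
              while len(survivors) + len(children) < pop_size:
                  a, b = random.sample(survivors, 2)
                  cut = random.randrange(1, n_weights)      # one-point crossover
                  child = [w + random.gauss(0, 0.05)        # gaussian mutation
                           if random.random() < mutation_rate else w
                           for w in a[:cut] + b[cut:]]
                  children.append(child)
              pop = survivors + children
          return max(pop, key=fitness)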
  5. Craven, T.C.: A phrase flipper for the assistance of writers of abstracts and other text (1995) 0.00
    0.0014503849 = product of:
      0.013053464 = sum of:
        0.0050667557 = weight(_text_:in in 4897) [ClassicSimilarity], result of:
          0.0050667557 = score(doc=4897,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.17003182 = fieldWeight in 4897, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4897)
        0.007986708 = product of:
          0.023960123 = sum of:
            0.023960123 = weight(_text_:29 in 4897) [ClassicSimilarity], result of:
              0.023960123 = score(doc=4897,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.31092256 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4897)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Describes computerized tools for computer-assisted abstracting. FlipPhr is a Microsoft Windows application program that rearranges (flips) phrases or other expressions in accordance with rules in a grammar. The flipping may be invoked with a single keystroke from within various Windows application programs that allow cutting and pasting of text. The user may modify the grammar to provide for different kinds of flipping.
    Date
    17. 8.1996 10:29:59
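    Sketch
    A toy illustration of rule-driven phrase flipping in the spirit of FlipPhr; the single rule shown ("X of Y" -> "Y X") is an assumption, since the record does not reproduce the actual grammar, which FlipPhr lets the user edit.
      import re

      # assumed rule: "X of Y" -> "Y X", e.g. "retrieval of information"
      # -> "information retrieval"
      FLIP_RULES = [(re.compile(r"^(\w+(?: \w+)*) of (\w+(?: \w+)*)$"), r"\2 \1")]

      def flip(phrase):
          for pattern, template in FLIP_RULES:
              match = pattern.match(phrase)
              if match:
                  return match.expand(template)
          return phrase                                 # no rule applies

      print(flip("abstracting of legal cases"))         # -> "legal cases abstracting"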
  6. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.00
    0.0014494912 = product of:
      0.013045421 = sum of:
        0.007109274 = weight(_text_:in in 948) [ClassicSimilarity], result of:
          0.007109274 = score(doc=948,freq=14.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.23857531 = fieldWeight in 948, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=948)
        0.0059361467 = product of:
          0.01780844 = sum of:
            0.01780844 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.01780844 = score(doc=948,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
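    Sketch
    The system described above builds on SumBasic; a simplified sketch of that frequency-based baseline is given below. The task-focusing, sentence-simplification and lexical-expansion components are not shown, and picking the best sentence directly is a simplification of SumBasic's word-by-word selection.
      from collections import Counter

      def sumbasic(sentences, max_sentences=3):
          """Pick sentences whose words are frequent in the input, squaring
          word probabilities after each pick to discourage redundancy."""
          tokenized = [s.lower().split() for s in sentences]
          counts = Counter(w for toks in tokenized for w in toks)
          total = sum(counts.values())
          prob = {w: c / total for w, c in counts.items()}

          summary, remaining = [], list(range(len(sentences)))
          while remaining and len(summary) < max_sentences:
              best = max(remaining, key=lambda i:
                         sum(prob[w] for w in tokenized[i]) / max(len(tokenized[i]), 1))
              summary.append(sentences[best])
              remaining.remove(best)
              for w in tokenized[best]:                 # discount words already covered
                  prob[w] = prob[w] ** 2
          return summary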
  7. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.00
    0.0012775111 = product of:
      0.0114976 = sum of:
        0.003582737 = weight(_text_:in in 6751) [ClassicSimilarity], result of:
          0.003582737 = score(doc=6751,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.120230645 = fieldWeight in 6751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6751)
        0.007914863 = product of:
          0.023744587 = sum of:
            0.023744587 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.023744587 = score(doc=6751,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Presents a system for summarizing quantitative data in natural language, focusing on the use of a corpus of basketball game summaries, drawn from online news services, to empirically shape the system design and to evaluate the approach. Initial corpus analysis revealed characteristics of textual summaries that challenge the capabilities of current language generation systems. A revision-based corpus analysis was used to identify and encode the revision rules of the system. Presents a quantitative evaluation, using several test corpora, to measure the robustness of the new revision-based model.
    Date
    6. 3.1997 16:22:15
  8. Sweeney, S.; Crestani, F.; Losada, D.E.: 'Show me more' : incremental length summarisation using novelty detection (2008) 0.00
    0.0011640685 = product of:
      0.010476616 = sum of:
        0.0054849237 = weight(_text_:in in 2054) [ClassicSimilarity], result of:
          0.0054849237 = score(doc=2054,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.18406484 = fieldWeight in 2054, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2054)
        0.0049916925 = product of:
          0.0149750775 = sum of:
            0.0149750775 = weight(_text_:29 in 2054) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=2054,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 2054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2054)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    The paper presents a study investigating the effects of incorporating novelty detection in automatic text summarisation. Condensing a textual document, automatic text summarisation can reduce the need to refer to the source document. It also offers a means to deliver device-friendly content when accessing information in non-traditional environments. An effective method of summarisation could be to produce a summary that includes only novel information. However, focusing exclusively on novel parts may result in a loss of context, which may affect the correct interpretation of the summary with respect to the source document. In this study we compare two strategies to produce summaries that incorporate novelty in different ways: a constant length summary, which contains only novel sentences, and an incremental summary, containing additional sentences that provide context. The aim is to establish whether a summary that contains only novel sentences provides a sufficient basis to determine relevance of a document, or if indeed we need to include additional sentences to provide context. Findings from the study seem to suggest that there is only a minimal difference in performance for the tasks we set our users and that the presence of contextual information is not so important. However, for the case of mobile information access, a summary that contains only novel information does offer benefits, given bandwidth constraints.
    Date
    29. 7.2008 19:35:12
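    Sketch
    A minimal sketch of the novelty filter compared in the study (not the authors' system): a sentence enters the summary only if its overlap with material the user has already seen stays below a threshold. Jaccard word overlap stands in for whatever novelty measure was actually used.
      def novelty_summary(sentences, already_seen=(), max_overlap=0.3):
          """Keep only sentences that add something new relative to text the
          user has already seen and to earlier sentences of this summary."""
          def jaccard(a, b):
              a, b = set(a), set(b)
              return len(a & b) / len(a | b) if a | b else 0.0

          shown = [s.lower().split() for s in already_seen]
          summary = []
          for sent in sentences:
              toks = sent.lower().split()
              if all(jaccard(toks, prev) <= max_overlap for prev in shown):
                  summary.append(sent)       # the incremental variant would also
                  shown.append(toks)         # append a few context sentences here
          return summary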
  9. Uyttendaele, C.; Moens, M.-F.; Dumortier, J.: SALOMON: automatic abstracting of legal cases for effective access to court decisions (1998) 0.00
    0.0011248072 = product of:
      0.010123264 = sum of:
        0.0031348949 = weight(_text_:in in 495) [ClassicSimilarity], result of:
          0.0031348949 = score(doc=495,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.10520181 = fieldWeight in 495, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=495)
        0.006988369 = product of:
          0.020965107 = sum of:
            0.020965107 = weight(_text_:29 in 495) [ClassicSimilarity], result of:
              0.020965107 = score(doc=495,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.27205724 = fieldWeight in 495, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=495)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    The SALOMON project summarises Belgian criminal cases in order to improve access to the large number of existing and future cases. A double methodology was used when developing SALOMON: the cases are processed by employing additional knowledge to interpret structural patterns and features on the one hand and by way of occurrence statistics of index terms on the other. SALOMON performs an initial categorisation and structuring of the cases and subsequently extracts the most relevant text units of the alleged offences and of the opinion of the court. The SALOMON techniques do not themselves solve any legal questions, but they do guide the user effectively towards relevant texts.
    Date
    17. 7.1996 14:16:29
  10. Pinto, M.: Engineering the production of meta-information : the abstracting concern (2003) 0.00
    0.0010981164 = product of:
      0.019766094 = sum of:
        0.019766094 = product of:
          0.05929828 = sum of:
            0.05929828 = weight(_text_:29 in 4667) [ClassicSimilarity], result of:
              0.05929828 = score(doc=4667,freq=4.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.7694941 = fieldWeight in 4667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4667)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Date
    27.11.2005 18:29:55
    Source
    Journal of information science. 29(2003) no.5, S.405-418
  11. Wang, S.; Koopman, R.: Embed first, then predict (2019) 0.00
    0.0010522349 = product of:
      0.0094701145 = sum of:
        0.0044784215 = weight(_text_:in in 5400) [ClassicSimilarity], result of:
          0.0044784215 = score(doc=5400,freq=8.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.15028831 = fieldWeight in 5400, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5400)
        0.0049916925 = product of:
          0.0149750775 = sum of:
            0.0149750775 = weight(_text_:29 in 5400) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=5400,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 5400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5400)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Automatic subject prediction is a desirable feature for modern digital library systems, as manual indexing can no longer cope with the rapid growth of digital collections. It is also desirable to be able to identify a small set of entities (e.g., authors, citations, bibliographic records) which are most relevant to a query. This gets more difficult when the amount of data increases dramatically. Data sparsity and model scalability are the major challenges to solving this type of extreme multilabel classification problem automatically. In this paper, we propose to address this problem in two steps: we first embed different types of entities into the same semantic space, where similarity could be computed easily; second, we propose a novel non-parametric method to identify the most relevant entities in addition to direct semantic similarities. We show how effectively this approach predicts even very specialised subjects, which are associated with few documents in the training set and are more problematic for a classifier.
    Date
    29. 9.2019 12:18:42
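    Sketch
    A minimal sketch of the "embed first, then predict" idea, assuming documents and candidate subject labels have already been embedded into the same vector space; the paper's non-parametric prediction step is reduced here to a plain cosine-similarity ranking.
      import numpy as np

      def predict_subjects(doc_vec, label_vecs, labels, k=5):
          """Rank candidate subject labels by cosine similarity to the embedded
          document and return the top k (labels aligned with label_vecs rows)."""
          doc = doc_vec / np.linalg.norm(doc_vec)
          lab = label_vecs / np.linalg.norm(label_vecs, axis=1, keepdims=True)
          sims = lab @ doc                                  # cosine similarities
          top = np.argsort(-sims)[:k]
          return [(labels[i], float(sims[i])) for i in top]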
  12. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.00
    9.015012E-4 = product of:
      0.008113511 = sum of:
        0.0031667221 = weight(_text_:in in 5290) [ClassicSimilarity], result of:
          0.0031667221 = score(doc=5290,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.10626988 = fieldWeight in 5290, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5290)
        0.0049467892 = product of:
          0.014840367 = sum of:
            0.014840367 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.014840367 = score(doc=5290,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Document keyphrases provide a concise summary of a document's content, offering semantic metadata summarizing a document. They can be used in many applications related to knowledge management and text mining, such as automatic text summarization, development of search engines, document clustering, document classification, thesaurus construction, and browsing interfaces. Because only a small portion of documents have keyphrases assigned by authors, and it is time-consuming and costly to manually assign keyphrases to documents, it is necessary to develop an algorithm to automatically generate keyphrases for documents. This paper describes a Keyphrase Identification Program (KIP), which extracts document keyphrases by using prior positive samples of human identified phrases to assign weights to the candidate keyphrases. The logic of our algorithm is: The more keywords a candidate keyphrase contains and the more significant these keywords are, the more likely this candidate phrase is a keyphrase. KIP's learning function can enrich the glossary database by automatically adding new identified keyphrases to the database. KIP's personalization feature will let the user build a glossary database specifically suitable for the area of his/her interest. The evaluation results show that KIP's performance is better than the systems we compared to and that the learning function is effective.
    Date
    22. 7.2006 17:25:48
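    Sketch
    The scoring rule quoted in the abstract ("the more keywords a candidate keyphrase contains and the more significant these keywords are ...") can be sketched as a weighted sum; the glossary of keyword weights learned from prior human-identified phrases is assumed as input.
      def keyphrase_score(candidate, keyword_weights):
          """Sum the weights of known keywords contained in the candidate;
          keyword_weights is the assumed glossary of significant keywords."""
          return sum(keyword_weights.get(w, 0.0) for w in candidate.lower().split())

      def rank_candidates(candidates, keyword_weights, top_n=10):
          return sorted(candidates, reverse=True,
                        key=lambda c: keyphrase_score(c, keyword_weights))[:top_n]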
  13. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.00
    9.015012E-4 = product of:
      0.008113511 = sum of:
        0.0031667221 = weight(_text_:in in 889) [ClassicSimilarity], result of:
          0.0031667221 = score(doc=889,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.10626988 = fieldWeight in 889, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=889)
        0.0049467892 = product of:
          0.014840367 = sum of:
            0.014840367 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.014840367 = score(doc=889,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    The automatic summarization of scientific articles differs from other text genres because of the structured format and longer text length. Previous approaches have focused on tackling the lengthy nature of scientific articles, aiming to improve the computational efficiency of summarizing long text using a flat, unstructured abstract. However, the structured format of scientific articles and characteristics of each section have not been fully explored, despite their importance. The lack of a sufficient investigation and discussion of various characteristics for each section and their influence on summarization results has hindered the practical use of automatic summarization for scientific articles. To provide a balanced abstract proportionally emphasizing each section of a scientific article, the community introduced the structured abstract, an abstract with distinct, labeled sections. Using this information, in this study, we aim to understand tasks ranging from data preparation to model evaluation from diverse viewpoints. Specifically, we provide a preprocessed large-scale dataset and propose a summarization method applying the introduction, methods, results, and discussion (IMRaD) format reflecting the characteristics of each section. We also discuss the objective benchmarks and perspectives of state-of-the-art algorithms and present the challenges and research directions in this area.
    Date
    22. 1.2023 18:57:12
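    Sketch
    The section-aware idea can be illustrated independently of the authors' model: summarize each IMRaD section on its own and concatenate, so that every section contributes to the structured abstract. The summarize callable is an assumed extractive summarizer, not the paper's system.
      def structured_summary(sections, summarize, sentences_per_section=2):
          """sections maps IMRaD section names to their full text; summarize is
          an assumed callable (text, n_sentences) -> summary string."""
          order = ["introduction", "methods", "results", "discussion"]
          parts = [f"{name.capitalize()}: {summarize(sections[name], sentences_per_section)}"
                   for name in order if name in sections]
          return "\n".join(parts)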
  14. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.00
    7.984445E-4 = product of:
      0.0071860002 = sum of:
        0.0022392108 = weight(_text_:in in 1012) [ClassicSimilarity], result of:
          0.0022392108 = score(doc=1012,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.07514416 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.0049467892 = product of:
          0.014840367 = sum of:
            0.014840367 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.014840367 = score(doc=1012,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has been emerging. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
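    Sketch
    The control-code mechanism can be illustrated without the underlying BART/T5 models: the desired keyphrase function is simply prepended to the source text before generation. The token format shown is an assumption, not the paper's interface.
      def build_controlled_input(document, keyphrase_function):
          """Prefix the source text with a control code naming the desired
          keyphrase category (e.g. 'method', 'dataset', 'task'); a seq2seq
          model fine-tuned on such inputs then generates keyphrases of that
          category.  The '<kp-...>' token format is illustrative only."""
          return f"<kp-{keyphrase_function}> {document}"

      # usage: feed build_controlled_input(abstract_text, "method") to the
      # fine-tuned generator instead of the raw abstract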
  15. Kannan, R.; Ghinea, G.; Swaminathan, S.: What do you wish to see? : A summarization system for movies based on user preferences (2015) 0.00
    7.251924E-4 = product of:
      0.006526732 = sum of:
        0.0025333778 = weight(_text_:in in 2683) [ClassicSimilarity], result of:
          0.0025333778 = score(doc=2683,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.08501591 = fieldWeight in 2683, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2683)
        0.003993354 = product of:
          0.011980061 = sum of:
            0.011980061 = weight(_text_:29 in 2683) [ClassicSimilarity], result of:
              0.011980061 = score(doc=2683,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.15546128 = fieldWeight in 2683, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2683)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    Video summarization aims at producing a compact version of a full-length video while preserving the significant content of the original video. Movie summarization condenses a full-length movie into a summary that still retains the most significant and interesting content of the original movie. In the past, several movie summarization systems have been proposed to generate a movie summary based on low-level video features such as color, motion, texture, etc. However, a generic summary, which is common to everyone and is produced based only on low-level video features will not satisfy every user. As users' preferences for the summary differ vastly for the same movie, there is a need for a personalized movie summarization system nowadays. To address this demand, this paper proposes a novel system to generate semantically meaningful video summaries for the same movie, which are tailored to the preferences and interests of a user. For a given movie, shots and scenes are automatically detected and their high-level features are semi-automatically annotated. Preferences over high-level movie features are explicitly collected from the user using a query interface. The user preferences are generated by means of a stored-query. Movie summaries are generated at shot level and scene level, where shots or scenes are selected for summary skim based on the similarity measured between shots and scenes, and the user's preferences. The proposed movie summarization system is evaluated subjectively using a sample of 20 subjects with eight movies in the English language. The quality of the generated summaries is assessed by informativeness, enjoyability, relevance, and acceptance metrics and Quality of Perception measures. Further, the usability of the proposed summarization system is subjectively evaluated by conducting a questionnaire survey. The experimental results on the performance of the proposed movie summarization approach show the potential of the proposed system.
    Date
    25. 1.2016 18:45:29
  16. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.00
    4.3971458E-4 = product of:
      0.007914863 = sum of:
        0.007914863 = product of:
          0.023744587 = sum of:
            0.023744587 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.023744587 = score(doc=6599,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Date
    26. 2.1997 10:22:43
  17. Johnson, F.: Automatic abstracting research (1995) 0.00
    3.9808187E-4 = product of:
      0.007165474 = sum of:
        0.007165474 = weight(_text_:in in 3847) [ClassicSimilarity], result of:
          0.007165474 = score(doc=3847,freq=8.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.24046129 = fieldWeight in 3847, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3847)
      0.055555556 = coord(1/18)
    
    Abstract
    Discusses the attraction for researchers of the prospect of automatically generating abstracts but notes that the promise of superseding the human effort has yet to be realized. Notes ways in which progress in automatic abstracting research may come about and suggests a shift in the aim from reproducing the conventional benefits of abstracts to accentuating the advantages to users of the computerized representation of information in large textual databases
  18. Soricut, R.; Marcu, D.: Abstractive headline generation using WIDL-expressions (2007) 0.00
    3.8943547E-4 = product of:
      0.0070098387 = sum of:
        0.0070098387 = weight(_text_:in in 943) [ClassicSimilarity], result of:
          0.0070098387 = score(doc=943,freq=10.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.23523843 = fieldWeight in 943, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=943)
      0.055555556 = coord(1/18)
    
    Abstract
    We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows similar performance in quality with a state-of-the-art, extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation.
  19. Atanassova, I.; Bertin, M.; Larivière, V.: On the composition of scientific abstracts (2016) 0.00
    3.7320176E-4 = product of:
      0.0067176316 = sum of:
        0.0067176316 = weight(_text_:in in 3028) [ClassicSimilarity], result of:
          0.0067176316 = score(doc=3028,freq=18.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22543246 = fieldWeight in 3028, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3028)
      0.055555556 = coord(1/18)
    
    Abstract
    Purpose - Scientific abstracts reproduce only part of the information and the complexity of argumentation in a scientific article. The purpose of this paper is to provide a first analysis of the similarity between the text of scientific abstracts and the body of articles, using sentences as the basic textual unit. It contributes to the understanding of the structure of abstracts. Design/methodology/approach - Using sentence-based similarity metrics, the authors quantify the phenomenon of text re-use in abstracts and examine the positions of the sentences that are similar to sentences in abstracts in the introduction, methods, results and discussion structure, using a corpus of over 85,000 research articles published in the seven Public Library of Science journals. Findings - The authors provide evidence that 84 percent of abstracts have at least one sentence in common with the body of the paper. Studying the distributions of sentences in the body of the articles that are re-used in abstracts, the authors show that there exists a strong relation between the rhetorical structure of articles and the zones that authors re-use when writing abstracts, with sentences mainly coming from the beginning of the introduction and the end of the conclusion. Originality/value - Scientific abstracts contain what is considered by the author(s) as information that best describes the documents' content. This is a first study that examines the relation between the contents of abstracts and the rhetorical structure of scientific articles. The work might provide new insight for improving automatic abstracting tools as well as information retrieval approaches, in which text organization and structure are important features.
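    Sketch
    The sentence-level re-use measurement can be sketched as follows; simple word overlap stands in for the paper's similarity metrics, which are not reproduced in this record. For each abstract sentence, the most similar body sentence is located, and its relative position approximates where in the IMRaD structure it came from.
      def reuse_positions(abstract_sents, body_sents, threshold=0.5):
          """Return (abstract index, body index, similarity) for each abstract
          sentence whose best-matching body sentence exceeds the threshold."""
          def overlap(a, b):
              a, b = set(a.lower().split()), set(b.lower().split())
              return len(a & b) / max(len(a), 1)

          matches = []
          for i, a_sent in enumerate(abstract_sents):
              j, sim = max(((j, overlap(a_sent, b)) for j, b in enumerate(body_sents)),
                           key=lambda x: x[1])
              if sim >= threshold:
                  matches.append((i, j, sim))   # j / len(body_sents) ~ position
          return matches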
  20. Martinez-Romo, J.; Araujo, L.; Fernandez, A.D.: SemGraph : extracting keyphrases following a novel semantic graph-based approach (2016) 0.00
    3.656616E-4 = product of:
      0.0065819086 = sum of:
        0.0065819086 = weight(_text_:in in 2832) [ClassicSimilarity], result of:
          0.0065819086 = score(doc=2832,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22087781 = fieldWeight in 2832, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2832)
      0.055555556 = coord(1/18)
    
    Abstract
    Keyphrases represent the main topics a text is about. In this article, we introduce SemGraph, an unsupervised algorithm for extracting keyphrases from a collection of texts based on a semantic relationship graph. The main novelty of this algorithm is its ability to identify semantic relationships between words whose presence is statistically significant. Our method constructs a co-occurrence graph in which words appearing in the same document are linked, provided their presence in the collection is statistically significant with respect to a null model. Furthermore, the graph obtained is enriched with information from WordNet. We have used the most recent and standardized benchmark to evaluate the system's ability to detect the keyphrases that are part of the text. The result is a method that achieves improvements of 5.3% and 7.28% in F measure over the two labeled sets of keyphrases used in the evaluation of SemEval-2010.
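    Sketch
    A minimal sketch of the statistically filtered co-occurrence graph (without the WordNet enrichment or the subsequent ranking); the independence-based expected count used here is a simplification of the paper's null model.
      from collections import Counter
      from itertools import combinations

      def cooccurrence_graph(documents, min_ratio=2.0):
          """Link two words when their document co-occurrence exceeds what an
          independence baseline would predict (simplified null model)."""
          n_docs = len(documents)
          doc_words = [set(doc.lower().split()) for doc in documents]
          word_freq = Counter(w for words in doc_words for w in words)
          pair_freq = Counter(p for words in doc_words
                              for p in combinations(sorted(words), 2))
          edges = {}
          for (a, b), observed in pair_freq.items():
              expected = word_freq[a] * word_freq[b] / n_docs
              if expected > 0 and observed / expected >= min_ratio:
                  edges[(a, b)] = observed / expected
          return edges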
