Search (99 results, page 1 of 5)

  • theme_ss:"Automatisches Abstracting"
  1. Su, H.: Automatic abstracting (1996) 0.03
    0.026604077 = product of:
      0.07981223 = sum of:
        0.025194373 = weight(_text_:library in 150) [ClassicSimilarity], result of:
          0.025194373 = score(doc=150,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.29050803 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.078125 = fieldNorm(doc=150)
        0.023576811 = weight(_text_:of in 150) [ClassicSimilarity], result of:
          0.023576811 = score(doc=150,freq=14.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.45711282 = fieldWeight in 150, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=150)
        0.031041041 = product of:
          0.062082082 = sum of:
            0.062082082 = weight(_text_:problems in 150) [ClassicSimilarity], result of:
              0.062082082 = score(doc=150,freq=2.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.4560259 = fieldWeight in 150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.078125 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Presents an introductory overview of research into the automatic construction of abstracts from the texts of documents. Discusses the origin and definition of automatic abstracting; reasons for using automatic abstracting; methods of automatic abstracting; and evaluation problems
    Source
    Bulletin of the Library Association of China. 1996, no.56, Jun., S.41-47
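The indented breakdowns above are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) scorer. As a sketch of how the numbers fit together, the first hit's 0.026604077 can be rebuilt from Lucene's documented formulas, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), with queryNorm, fieldNorm, and the coord factors read straight off the tree:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

QUERY_NORM = 0.03298316   # queryNorm from the explain output
FIELD_NORM = 0.078125     # fieldNorm(doc=150)
MAX_DOCS = 44218

# The three matching clauses for doc 150: "library", "of", "problems";
# the "problems" clause carries an inner coord(1/2) = 0.5.
terms = [
    term_score(2.0, 8668, MAX_DOCS, QUERY_NORM, FIELD_NORM),        # library
    term_score(14.0, 25162, MAX_DOCS, QUERY_NORM, FIELD_NORM),      # of
    term_score(2.0, 1937, MAX_DOCS, QUERY_NORM, FIELD_NORM) * 0.5,  # problems
]

score = sum(terms) * (3 / 9)  # outer coord(3/9): 3 of 9 query clauses matched
```

Summing the three clause scores and applying the outer coord(3/9) reproduces the displayed score to the printed precision.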
  2. Edmundson, H.P.; Wyllis, R.E.: Problems in automatic abstracting (1964) 0.01
    0.012429585 = product of:
      0.055933133 = sum of:
        0.012475675 = weight(_text_:of in 3670) [ClassicSimilarity], result of:
          0.012475675 = score(doc=3670,freq=2.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.24188137 = fieldWeight in 3670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=3670)
        0.04345746 = product of:
          0.08691492 = sum of:
            0.08691492 = weight(_text_:problems in 3670) [ClassicSimilarity], result of:
              0.08691492 = score(doc=3670,freq=2.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.63843626 = fieldWeight in 3670, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3670)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Source
    Communications of the Association for Computing Machinery. 7(1964) no.1, S.259-263
  3. Paice, C.D.: Automatic abstracting (1994) 0.01
    0.012126425 = product of:
      0.054568913 = sum of:
        0.040310998 = weight(_text_:library in 1255) [ClassicSimilarity], result of:
          0.040310998 = score(doc=1255,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.46481284 = fieldWeight in 1255, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.125 = fieldNorm(doc=1255)
        0.014257914 = weight(_text_:of in 1255) [ClassicSimilarity], result of:
          0.014257914 = score(doc=1255,freq=2.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.27643585 = fieldWeight in 1255, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.125 = fieldNorm(doc=1255)
      0.22222222 = coord(2/9)
    
    Source
    Encyclopedia of library and information science. Vol.53, [=Suppl.16]
  4. Paice, C.D.: Automatic abstracting (1994) 0.01
    0.01187836 = product of:
      0.05345262 = sum of:
        0.035630222 = weight(_text_:library in 917) [ClassicSimilarity], result of:
          0.035630222 = score(doc=917,freq=4.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.4108404 = fieldWeight in 917, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.078125 = fieldNorm(doc=917)
        0.017822394 = weight(_text_:of in 917) [ClassicSimilarity], result of:
          0.017822394 = score(doc=917,freq=8.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.34554482 = fieldWeight in 917, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=917)
      0.22222222 = coord(2/9)
    
    Abstract
    The final report of the 2nd British Library abstracting project (the BLAB project), 1990-1992, which was carried out partly at the Computing Department of Lancaster University, and partly at the Centre for Computational Linguistics, UMIST. This project built on the results of the first project, of 1985-1987, to build a system designed to create abstracts automatically from given texts
    Imprint
    London : British Library
  5. Johnson, F.C.: A critical view of system-centered to user-centered evaluation of automatic abstracting research (1999) 0.01
    0.010834404 = product of:
      0.04875482 = sum of:
        0.030233247 = weight(_text_:library in 2994) [ClassicSimilarity], result of:
          0.030233247 = score(doc=2994,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.34860963 = fieldWeight in 2994, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.09375 = fieldNorm(doc=2994)
        0.018521573 = weight(_text_:of in 2994) [ClassicSimilarity], result of:
          0.018521573 = score(doc=2994,freq=6.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.3591007 = fieldWeight in 2994, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=2994)
      0.22222222 = coord(2/9)
    
    Source
    New review of information and library research. 5(1999), S.49-63
  6. McKeown, K.; Robin, J.; Kukich, K.: Generating concise natural language summaries (1995) 0.01
    0.008878275 = product of:
      0.039952237 = sum of:
        0.008911197 = weight(_text_:of in 2932) [ClassicSimilarity], result of:
          0.008911197 = score(doc=2932,freq=2.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.17277241 = fieldWeight in 2932, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=2932)
        0.031041041 = product of:
          0.062082082 = sum of:
            0.062082082 = weight(_text_:problems in 2932) [ClassicSimilarity], result of:
              0.062082082 = score(doc=2932,freq=2.0), product of:
                0.13613719 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.03298316 = queryNorm
                0.4560259 = fieldWeight in 2932, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2932)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Describes the problems of summary generation; the applications developed (STREAK, for basketball games, and PLANDOC, for telephone network planning activity); the linguistic constructions that the systems use to convey information concisely; and the textual constraints that determine what information gets included
  7. Craven, T.C.: A computer-aided abstracting tool kit (1993) 0.01
    0.0086704325 = product of:
      0.039016947 = sum of:
        0.020155499 = weight(_text_:library in 6506) [ClassicSimilarity], result of:
          0.020155499 = score(doc=6506,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.23240642 = fieldWeight in 6506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=6506)
        0.018861448 = weight(_text_:of in 6506) [ClassicSimilarity], result of:
          0.018861448 = score(doc=6506,freq=14.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.36569026 = fieldWeight in 6506, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6506)
      0.22222222 = coord(2/9)
    
    Abstract
    Describes the abstracting assistance features being prototyped in the TEXNET text network management system. Sentence weighting methods include: weighting negatively or positively on the stems in a selected passage; weighting on general lists of cue words; adjusting weights of selected segments; and weighting of occurrences of frequent stems. The user may adjust a number of parameters: the minimum strength of extracts; the threshold for frequent words/stems; and the amount by which sentence weight is to be adjusted for each weighting type
    Source
    Canadian journal of information and library science. 18(1993) no.2, S.20-31
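TEXNET itself is not reproduced here, but the kind of sentence weighting entry 7 describes (positive and negative cue-word lists plus frequent stems, with user-adjustable parameters) can be sketched in a few lines. The cue-word lists and parameter values below are hypothetical, purely for illustration:

```python
import re
from collections import Counter

# Hypothetical cue-word lists of the kind described above: bonus words
# raise a sentence's weight, stigma words lower it.
BONUS = {"significant", "results", "conclude", "propose"}
STIGMA = {"perhaps", "possibly"}

def weight_sentences(text, freq_threshold=2, cue_weight=1.0):
    """Rank sentences by frequent-stem density plus cue-word adjustments."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    # "frequent stems": words occurring at least freq_threshold times
    frequent = {w for w, c in Counter(words).items() if c >= freq_threshold}
    scored = []
    for s in sentences:
        tokens = re.findall(r"[a-z]+", s.lower())
        if not tokens:
            continue
        score = sum(t in frequent for t in tokens) / len(tokens)
        score += cue_weight * sum(t in BONUS for t in tokens)
        score -= cue_weight * sum(t in STIGMA for t in tokens)
        scored.append((score, s))
    return sorted(scored, reverse=True)
```

An extract would then keep the top-ranked sentences above some minimum strength, mirroring the adjustable threshold mentioned in the abstract.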
  8. Johnson, F.: Automatic abstracting research (1995) 0.01
    0.008359512 = product of:
      0.037617806 = sum of:
        0.020155499 = weight(_text_:library in 3847) [ClassicSimilarity], result of:
          0.020155499 = score(doc=3847,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.23240642 = fieldWeight in 3847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=3847)
        0.017462308 = weight(_text_:of in 3847) [ClassicSimilarity], result of:
          0.017462308 = score(doc=3847,freq=12.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.33856338 = fieldWeight in 3847, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3847)
      0.22222222 = coord(2/9)
    
    Abstract
    Discusses the attraction for researchers of the prospect of automatically generating abstracts but notes that the promise of superseding the human effort has yet to be realized. Notes ways in which progress in automatic abstracting research may come about and suggests a shift in the aim from reproducing the conventional benefits of abstracts to accentuating the advantages to users of the computerized representation of information in large textual databases
    Source
    Library review. 44(1995) no.8, S.28-36
  9. Craven, T.C.: A phrase flipper for the assistance of writers of abstracts and other text (1995) 0.01
    0.008021408 = product of:
      0.036096334 = sum of:
        0.020155499 = weight(_text_:library in 4897) [ClassicSimilarity], result of:
          0.020155499 = score(doc=4897,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.23240642 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=4897)
        0.015940834 = weight(_text_:of in 4897) [ClassicSimilarity], result of:
          0.015940834 = score(doc=4897,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.3090647 = fieldWeight in 4897, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4897)
      0.22222222 = coord(2/9)
    
    Abstract
    Describes computerized tools for computer assisted abstracting. FlipPhr is a Microsoft Windows application program that rearranges (flips) phrases or other expressions in accordance with rules in a grammar. The flipping may be invoked with a single keystroke from within various Windows application programs that allow cutting and pasting of text. The user may modify the grammar to provide for different kinds of flipping
    Source
    Canadian journal of information and library science. 20(1995) nos.3/4, S.41-49
  10. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.01
    0.007852746 = product of:
      0.03533736 = sum of:
        0.017462308 = weight(_text_:of in 6751) [ClassicSimilarity], result of:
          0.017462308 = score(doc=6751,freq=12.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.33856338 = fieldWeight in 6751, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6751)
        0.017875053 = product of:
          0.035750106 = sum of:
            0.035750106 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.035750106 = score(doc=6751,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Presents a system for summarizing quantitative data in natural language, focusing on the use of a corpus of basketball game summaries, drawn from online news services, to empirically shape the system design and to evaluate the approach. Initial corpus analysis revealed characteristics of textual summaries that challenge the capabilities of current language generation systems. A revision-based corpus analysis was used to identify and encode the revision rules of the system. Presents a quantitative evaluation, using several test corpora, to measure the robustness of the new revision-based model
    Date
    6. 3.1997 16:22:15
  11. Atanassova, I.; Bertin, M.; Larivière, V.: On the composition of scientific abstracts (2016) 0.01
    0.007547885 = product of:
      0.033965483 = sum of:
        0.0125971865 = weight(_text_:library in 3028) [ClassicSimilarity], result of:
          0.0125971865 = score(doc=3028,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.14525402 = fieldWeight in 3028, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3028)
        0.021368299 = weight(_text_:of in 3028) [ClassicSimilarity], result of:
          0.021368299 = score(doc=3028,freq=46.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.41429368 = fieldWeight in 3028, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3028)
      0.22222222 = coord(2/9)
    
    Abstract
    Purpose - Scientific abstracts reproduce only part of the information and the complexity of argumentation in a scientific article. This paper provides a first analysis of the similarity between the text of scientific abstracts and the body of articles, using sentences as the basic textual unit, and contributes to the understanding of the structure of abstracts. Design/methodology/approach - Using sentence-based similarity metrics, the authors quantify the phenomenon of text re-use in abstracts and examine the positions of the sentences that are similar to sentences in abstracts in the introduction, methods, results and discussion structure, using a corpus of over 85,000 research articles published in the seven Public Library of Science journals. Findings - The authors provide evidence that 84 percent of abstracts have at least one sentence in common with the body of the paper. Studying the distributions of sentences in the body of the articles that are re-used in abstracts, the authors show that there exists a strong relation between the rhetorical structure of articles and the zones that authors re-use when writing abstracts, with sentences mainly coming from the beginning of the introduction and the end of the conclusion. Originality/value - Scientific abstracts contain what is considered by the author(s) as the information that best describes the documents' content. This is a first study that examines the relation between the contents of abstracts and the rhetorical structure of scientific articles. The work might provide new insight for improving automatic abstracting tools as well as information retrieval approaches, in which text organization and structure are important features.
    Source
    Journal of documentation. 72(2016) no.4, S.636-647
  12. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.01
    0.0075146416 = product of:
      0.033815887 = sum of:
        0.015940834 = weight(_text_:of in 6599) [ClassicSimilarity], result of:
          0.015940834 = score(doc=6599,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.3090647 = fieldWeight in 6599, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6599)
        0.017875053 = product of:
          0.035750106 = sum of:
            0.035750106 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.035750106 = score(doc=6599,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    With the onset of the information explosion arising from digital libraries and access to a wealth of information through the Internet, the need to efficiently determine the relevance of a document becomes even more urgent. Describes a text extraction system (TES), which retrieves a set of sentences from a document to form an indicative abstract. Such an automated process enables information to be filtered more quickly. Discusses the combination of various text extraction techniques. Compares results with manually produced abstracts
    Date
    26. 2.1997 10:22:43
  13. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.01
    0.0075146416 = product of:
      0.033815887 = sum of:
        0.015940834 = weight(_text_:of in 6974) [ClassicSimilarity], result of:
          0.015940834 = score(doc=6974,freq=10.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.3090647 = fieldWeight in 6974, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.017875053 = product of:
          0.035750106 = sum of:
            0.035750106 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.035750106 = score(doc=6974,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Describes the application of weighting strategies to model uncertainties and probabilities in automatic abstracting systems, particularly in the concept selection phase. The weights were originally assigned in an ad hoc manner and were then refined by manual analysis of the results. The new method attempts to derive the weights more systematically, using a genetic algorithm
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
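As a toy illustration of the approach in entry 13, a genetic algorithm can refine a weight vector against a fitness function. The fitness here is a stand-in (distance to a hidden "optimal" vector); in the paper, fitness would come from evaluating the abstracts produced under the candidate concept-selection weights:

```python
import random

random.seed(0)

TARGET = [0.6, 0.1, 0.3]  # hidden "optimal" weights the GA should recover

def fitness(w):
    # Stand-in objective: negative squared distance to the hidden optimum.
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def mutate(w, rate=0.1):
    # Perturb each weight slightly, clamped to [0, 1]
    return [min(1.0, max(0.0, x + random.uniform(-rate, rate))) for x in w]

def crossover(a, b):
    # Single-point crossover of two parent weight vectors
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=60):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)
```

Because the fitter half survives unchanged each generation, the best fitness is non-decreasing, and the population converges toward the hidden target.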
  14. Kannan, R.; Ghinea, G.; Swaminathan, S.: What do you wish to see? : A summarization system for movies based on user preferences (2015) 0.01
    0.007378841 = product of:
      0.033204783 = sum of:
        0.011822038 = weight(_text_:of in 2683) [ClassicSimilarity], result of:
          0.011822038 = score(doc=2683,freq=22.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.2292085 = fieldWeight in 2683, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2683)
        0.021382743 = product of:
          0.042765487 = sum of:
            0.042765487 = weight(_text_:etc in 2683) [ClassicSimilarity], result of:
              0.042765487 = score(doc=2683,freq=2.0), product of:
                0.17865302 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03298316 = queryNorm
                0.23937736 = fieldWeight in 2683, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2683)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Video summarization aims at producing a compact version of a full-length video while preserving the significant content of the original video. Movie summarization condenses a full-length movie into a summary that still retains the most significant and interesting content of the original movie. In the past, several movie summarization systems have been proposed to generate a movie summary based on low-level video features such as color, motion, texture, etc. However, a generic summary, which is common to everyone and is produced based only on low-level video features, will not satisfy every user. As users' preferences for the summary differ vastly for the same movie, there is a need nowadays for a personalized movie summarization system. To address this demand, this paper proposes a novel system to generate semantically meaningful video summaries for the same movie, tailored to the preferences and interests of a user. For a given movie, shots and scenes are automatically detected and their high-level features are semi-automatically annotated. Preferences over high-level movie features are explicitly collected from the user using a query interface. The user preferences are generated by means of a stored query. Movie summaries are generated at shot level and scene level, where shots or scenes are selected for the summary skim based on the similarity measured between shots and scenes and the user's preferences. The proposed movie summarization system is evaluated subjectively using a sample of 20 subjects with eight movies in the English language. The quality of the generated summaries is assessed by informativeness, enjoyability, relevance, and acceptance metrics and Quality of Perception measures. Further, the usability of the proposed summarization system is subjectively evaluated by conducting a questionnaire survey. The experimental results on the performance of the proposed movie summarization approach show the potential of the proposed system.
  15. Marcu, D.: Automatic abstracting and summarization (2009) 0.01
    0.006691497 = product of:
      0.030111736 = sum of:
        0.01763606 = weight(_text_:library in 3748) [ClassicSimilarity], result of:
          0.01763606 = score(doc=3748,freq=2.0), product of:
            0.08672522 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03298316 = queryNorm
            0.20335563 = fieldWeight in 3748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3748)
        0.012475675 = weight(_text_:of in 3748) [ClassicSimilarity], result of:
          0.012475675 = score(doc=3748,freq=8.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.24188137 = fieldWeight in 3748, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3748)
      0.22222222 = coord(2/9)
    
    Abstract
    After lying dormant for a few decades, the field of automated text summarization has experienced a tremendous resurgence of interest. Recently, many new algorithms and techniques have been proposed for identifying important information in single documents and document collections, and for mapping this information into grammatical, cohesive, and coherent abstracts. Since 1997, annual workshops, conferences, and large-scale comparative evaluations have provided a rich environment for exchanging ideas between researchers in Asia, Europe, and North America. This entry reviews the main developments in the field and provides a guiding map to those interested in understanding the strengths and weaknesses of an increasingly ubiquitous technology.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  16. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    0.0065436536 = product of:
      0.029446442 = sum of:
        0.016040152 = weight(_text_:of in 948) [ClassicSimilarity], result of:
          0.016040152 = score(doc=948,freq=18.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.3109903 = fieldWeight in 948, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=948)
        0.013406289 = product of:
          0.026812578 = sum of:
            0.026812578 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.026812578 = score(doc=948,freq=2.0), product of:
                0.11550141 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03298316 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
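The SumBasic baseline that entry 16 builds on is simple enough to sketch: score each sentence by the average unigram probability of its words, emit the best one, then square the probabilities of the words just used so later picks favor uncovered content. A minimal version (tokenization deliberately simplified):

```python
import re
from collections import Counter

def sumbasic(sentences, n=2):
    """Return n sentences chosen by the SumBasic word-probability scheme."""
    tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    counts = Counter(w for toks in tokenized for w in toks)
    total = sum(counts.values())
    prob = {w: c / total for w, c in counts.items()}
    summary = []
    candidates = [i for i in range(len(sentences)) if tokenized[i]]
    while candidates and len(summary) < n:
        # Pick the sentence with the highest average word probability
        best = max(
            candidates,
            key=lambda i: sum(prob[w] for w in tokenized[i]) / len(tokenized[i]),
        )
        summary.append(sentences[best])
        candidates.remove(best)
        for w in set(tokenized[best]):
            prob[w] **= 2  # down-weight words already covered
    return summary
```

The squaring step is what drives coverage: words that already appear in the summary become much less attractive in the next round.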
  17. Ou, S.; Khoo, S.G.; Goh, D.H.: Automatic multidocument summarization of research abstracts : design and user evaluation (2007) 0.01
    
    Abstract
    The purpose of this study was to develop a method for automatic construction of multidocument summaries of sets of research abstracts that may be retrieved by a digital library or search engine in response to a user query. Sociology dissertation abstracts were selected as the sample domain in this study. A variable-based framework was proposed for integrating and organizing research concepts and relationships as well as research methods and contextual relations extracted from different dissertation abstracts. Based on the framework, a new summarization method was developed, which parses the discourse structure of abstracts, extracts research concepts and relationships, integrates the information across different abstracts, and organizes and presents them in a Web-based interface. The focus of this article is on the user evaluation that was performed to assess the overall quality and usefulness of the summaries. Two types of variable-based summaries generated using the summarization method (with or without the use of a taxonomy) were compared against a sentence-based summary that lists only the research-objective sentences extracted from each abstract and another sentence-based summary generated using the MEAD system that extracts important sentences. The evaluation results indicate that the majority of sociological researchers (70%) and general users (64%) preferred the variable-based summaries generated with the use of the taxonomy.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.10, S.1419-1435
  18. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.01
    
    Abstract
    The automatic summarization of scientific articles differs from other text genres because of the structured format and longer text length. Previous approaches have focused on tackling the lengthy nature of scientific articles, aiming to improve the computational efficiency of summarizing long text using a flat, unstructured abstract. However, the structured format of scientific articles and characteristics of each section have not been fully explored, despite their importance. The lack of a sufficient investigation and discussion of various characteristics for each section and their influence on summarization results has hindered the practical use of automatic summarization for scientific articles. To provide a balanced abstract proportionally emphasizing each section of a scientific article, the community introduced the structured abstract, an abstract with distinct, labeled sections. Using this information, in this study, we aim to understand tasks ranging from data preparation to model evaluation from diverse viewpoints. Specifically, we provide a preprocessed large-scale dataset and propose a summarization method applying the introduction, methods, results, and discussion (IMRaD) format reflecting the characteristics of each section. We also discuss the objective benchmarks and perspectives of state-of-the-art algorithms and present the challenges and research directions in this area.
    Date
    22. 1.2023 18:57:12
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.2, S.234-248
  19. Finegan-Dollak, C.; Radev, D.R.: Sentence simplification, compression, and disaggregation for summarization of sophisticated documents (2016) 0.01
    
    Abstract
    Sophisticated documents like legal cases and biomedical articles can contain unusually long sentences. Extractive summarizers can select such sentences (potentially adding hundreds of unnecessary words to the summary) or exclude them and lose important content. Sentence simplification or compression seems on the surface to be a promising solution. However, compression removes words before the selection algorithm can use them, and simplification generates sentences that may be ambiguous in an extractive summary. We therefore compare the performance of an extractive summarizer selecting from the sentences of the original document with that of the summarizer selecting from sentences shortened in three ways: simplification, compression, and disaggregation, which splits one sentence into several according to rules designed to keep all meaning. We find that on legal cases and biomedical articles, these shortening methods generate ungrammatical output. Human evaluators performed an extrinsic evaluation consisting of comprehension questions about the summaries. Evaluators given compressed, simplified, or disaggregated versions of the summaries answered fewer questions correctly than did those given summaries with unaltered sentences. Error analysis suggests 2 causes: Altered sentences sometimes interact with the sentence selection algorithm, and alterations to sentences sometimes obscure information in the summary. We discuss future work to alleviate these problems.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2437-2453
  20. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.01
    
    Abstract
    We propose a tag-based framework that simulates human abstractors' ability to select significant sentences based on key concepts in a sentence as well as the semantic relations between key concepts to create generic summaries of transcribed lecture videos. The proposed extractive summarization method uses tags (viewer- and author-assigned terms) as key concepts. Our method employs Flickr tag clusters and WordNet synonyms to expand tags and detect the semantic relations between tags. This method helps select sentences that have a greater number of semantically related key concepts. To investigate the effectiveness and uniqueness of the proposed method, we compare it with an existing technique, latent semantic analysis (LSA), using intrinsic and extrinsic evaluations. The results of intrinsic evaluation show that the tag-based method is as or more effective than the LSA method. We also observe that in the extrinsic evaluation, the grand mean accuracy score of the tag-based method is higher than that of the LSA method, with a statistically significant difference. Elaborating on our results, we discuss the theoretical and practical implications of our findings for speech video summarization and retrieval.
    Date
    22. 1.2016 12:29:41
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.366-379

Languages

  • e 94
  • d 3
  • chi 2

Types

  • a 95
  • m 3
  • el 1
  • r 1