Search (95 results, page 2 of 5)

  • language_ss:"e"
  • theme_ss:"Automatisches Abstracting"
  1. Endres-Niggemeyer, B.: SimSum : an empirically founded simulation of summarizing (2000) 0.01
    Source
    Information processing and management. 36(2000) no.4, S.659-682
    Type
    a
  2. McKeown, K.; Robin, J.; Kukich, K.: Generating concise natural language summaries (1995) 0.01
    Abstract
    Description of the problems of summary generation; the applications developed (STREAK, for basketball games, and PLANDOC, for telephone network planning activity); the linguistic constructions that the systems use to convey information concisely; and the textual constraints that determine what information gets included.
    Source
    Information processing and management. 31(1995) no.5, S.703-733
    Type
    a
  3. Harabagiu, S.; Hickl, A.; Lacatusu, F.: Satisfying information needs with multi-document summaries (2007) 0.01
    Abstract
    Generating summaries that meet the information needs of a user relies on (1) several forms of question decomposition; (2) different summarization approaches; and (3) textual inference for combining the summarization strategies. This novel framework for summarization has the advantage of producing highly responsive summaries, as indicated by the evaluation results.
    Source
    Information processing and management. 43(2007) no.6, S.1619-1642
    Type
    a
  4. Abdi, A.; Idris, N.; Alguliev, R.M.; Aliguliyev, R.M.: Automatic summarization assessment through a combination of semantic and syntactic information for intelligent educational systems (2015) 0.01
    Abstract
    Summary writing is a process for creating a short version of a source text. It can be used as a measure of understanding. As grading students' summaries is a very time-consuming task, computer-assisted assessment can help teachers perform the grading more effectively. Several techniques, such as BLEU, ROUGE, N-gram co-occurrence, Latent Semantic Analysis (LSA), LSA_Ngram and LSA_ERB, have been proposed to support the automatic assessment of students' summaries. Since these techniques are more suitable for long texts, their performance is not satisfactory for the evaluation of short summaries. This paper proposes a specialized method that works well in assessing short summaries. Our proposed method integrates the semantic relations between words and their syntactic composition. As a result, it achieves high accuracy and improves on the performance of current techniques. Experiments show that it is preferable to the existing techniques. A summary evaluation system based on the proposed method has also been developed.
    Source
    Information processing and management. 51(2015) no.4, S.340-358
    Type
    a
  5. Marcu, D.: Automatic abstracting and summarization (2009) 0.01
    Abstract
    After lying dormant for a few decades, the field of automated text summarization has experienced a tremendous resurgence of interest. Recently, many new algorithms and techniques have been proposed for identifying important information in single documents and document collections, and for mapping this information into grammatical, cohesive, and coherent abstracts. Since 1997, annual workshops, conferences, and large-scale comparative evaluations have provided a rich environment for exchanging ideas between researchers in Asia, Europe, and North America. This entry reviews the main developments in the field and provides a guiding map to those interested in understanding the strengths and weaknesses of an increasingly ubiquitous technology.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
    Type
    a
  6. Hobson, S.P.; Dorr, B.J.; Monz, C.; Schwartz, R.: Task-based evaluation of text summarization using Relevance Prediction (2007) 0.01
    Abstract
    This article introduces a new task-based evaluation measure called Relevance Prediction that is a more intuitive measure of an individual's performance on a real-world task than interannotator agreement. Relevance Prediction parallels what a user does in the real-world task of browsing a set of documents using standard search tools, i.e., the user judges relevance based on a short summary and then that same user - not an independent user - decides whether to open (and judge) the corresponding document. This measure is shown to be a more reliable measure of task performance than LDC Agreement, a current gold-standard based measure used in the summarization evaluation community. Our goal is to provide a stable framework within which developers of new automatic measures may make stronger statistical statements about the effectiveness of their measures in predicting summary usefulness. We demonstrate - as a proof-of-concept methodology for automatic metric developers - that a current automatic evaluation measure has a better correlation with Relevance Prediction than with LDC Agreement and that the significance level for detected differences is higher for the former than for the latter.
    Source
    Information processing and management. 43(2007) no.6, S.1482-1499
    Type
    a
  7. Ou, S.; Khoo, C.S.G.; Goh, D.H.: Multi-document summarization of news articles using an event-based framework (2006) 0.01
    Abstract
    Purpose - The purpose of this research is to develop a method for automatic construction of multi-document summaries of sets of news articles that might be retrieved by a web search engine in response to a user query.
    Design/methodology/approach - Based on cross-document discourse analysis, an event-based framework is proposed for integrating and organizing information extracted from different news articles. It has a hierarchical structure in which the summarized information is presented at the top level and more detailed information given at the lower levels. A tree-view interface was implemented for displaying a multi-document summary based on the framework. A preliminary user evaluation was performed by comparing the framework-based summaries against the sentence-based summaries.
    Findings - In a small evaluation, all the human subjects preferred the framework-based summaries to the sentence-based summaries. This indicates that the event-based framework is an effective way to summarize a set of news articles reporting an event or a series of relevant events.
    Research limitations/implications - Limited to event-based news articles only; not applicable to news critiques and other kinds of news articles. A summarization system based on the event-based framework is being implemented.
    Practical implications - Multi-document summarization of news articles can adopt the proposed event-based framework.
    Originality/value - An event-based framework for summarizing sets of news articles was developed and evaluated using a tree-view interface for displaying such summaries.
    Type
    a
  8. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.01
    Abstract
    Keywords can be considered condensed versions of documents and short forms of their summaries. In this paper, the problem of automatic extraction of keywords from documents is treated as a supervised learning task. A lexical chain holds a set of semantically related words of a text, and it can be said that a lexical chain represents the semantic content of a portion of the text. Although lexical chains have been extensively used in text summarization, their use for the keyword extraction problem has not been fully investigated. In this paper, a keyword extraction technique that uses lexical chains is described, and encouraging results are obtained.
    Source
    Information processing and management. 43(2007) no.6, S.1705-1714
    Type
    a
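    The lexical-chain idea in the entry above is easy to illustrate in miniature: group related words into chains, score each chain, and propose members of the strongest chain as keywords. The Python sketch below is a deliberately naive reading of that idea; the hand-made RELATED map stands in for a thesaurus such as WordNet, and the greedy attachment and chain scoring are invented simplifications, not the authors' algorithm.

      # Toy lexical chains: attach each word to the first chain holding a
      # related word; score chains by length times distinct members.
      RELATED = {  # hypothetical stand-in for WordNet relations
          "car": {"automobile", "vehicle", "engine"},
          "automobile": {"car", "vehicle"},
          "vehicle": {"car", "automobile", "truck"},
          "engine": {"car", "motor"},
          "banana": {"fruit"},
          "fruit": {"banana", "apple"},
      }

      def build_chains(words):
          chains = []
          for w in words:
              for chain in chains:
                  if any(w == c or w in RELATED.get(c, set())
                         or c in RELATED.get(w, set()) for c in chain):
                      chain.append(w)
                      break
              else:  # no chain holds a related word: start a new chain
                  chains.append([w])
          return chains

      words = "car engine vehicle banana automobile fruit car".split()
      chains = build_chains(words)
      best = max(chains, key=lambda c: len(c) * len(set(c)))
      print(set(best))  # candidate keywords from the strongest chain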
  9. Craven, T.C.: ¬A phrase flipper for the assistance of writers of abstracts and other text (1995) 0.01
    Abstract
    Describes computerized tools for computer-assisted abstracting. FlipPhr is a Microsoft Windows application program that rearranges (flips) phrases or other expressions in accordance with rules in a grammar. The flipping may be invoked with a single keystroke from within various Windows application programs that allow cutting and pasting of text. The user may modify the grammar to provide for different kinds of flipping.
    Source
    Canadian journal of information and library science. 20(1995) nos.3/4, S.41-49
    Type
    a
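    The grammar-driven phrase flipping described above can be sketched in a few lines. This is only a guess at the flavor of such a tool, not Craven's grammar: the single rule here, "A of B" -> "B A", is invented for illustration.

      import re

      # One invented flipping rule: "A of B" -> "B A".
      RULES = [
          (re.compile(r"^(\w+(?: \w+)*) of (\w+(?: \w+)*)$"), r"\2 \1"),
      ]

      def flip(phrase: str) -> str:
          for pattern, template in RULES:
              m = pattern.match(phrase)
              if m:
                  return m.expand(template)
          return phrase  # no rule applies: leave the phrase unchanged

      print(flip("retrieval of information"))  # -> "information retrieval"
      print(flip("analysis of variance"))      # -> "variance analysis"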
  10. Martinez-Romo, J.; Araujo, L.; Fernandez, A.D.: SemGraph : extracting keyphrases following a novel semantic graph-based approach (2016) 0.01
    Abstract
    Keyphrases represent the main topics a text is about. In this article, we introduce SemGraph, an unsupervised algorithm for extracting keyphrases from a collection of texts based on a semantic relationship graph. The main novelty of this algorithm is its ability to identify semantic relationships between words whose presence is statistically significant. Our method constructs a co-occurrence graph in which words appearing in the same document are linked, provided their presence in the collection is statistically significant with respect to a null model. Furthermore, the graph obtained is enriched with information from WordNet. We have used the most recent and standardized benchmark to evaluate the system's ability to detect the keyphrases that are part of the text. The result is a method that achieves an improvement of 5.3% and 7.28% in F measure over the two labeled sets of keyphrases used in the evaluation of SemEval-2010.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.71-82
    Type
    a
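    The core construction in the abstract above, a co-occurrence graph pruned against a null model, can be sketched crudely. The independence null model and the z-score cutoff below are simplifying assumptions for illustration, not the SemGraph method, and the WordNet enrichment step is omitted.

      from collections import Counter
      from itertools import combinations
      from math import sqrt

      docs = [  # toy "collection": one set of terms per document
          {"graph", "keyphrase", "extraction"},
          {"graph", "keyphrase", "wordnet"},
          {"graph", "extraction", "wordnet"},
          {"music", "guitar"},
      ]

      n = len(docs)
      occ = Counter(w for d in docs for w in d)  # document frequency per word
      pair = Counter(p for d in docs for p in combinations(sorted(d), 2))

      edges = []
      for (a, b), observed in pair.items():
          expected = occ[a] * occ[b] / n  # expected count if a, b were independent
          sd = sqrt(expected * (1 - occ[a] / n) * (1 - occ[b] / n)) or 1.0
          if (observed - expected) / sd > 0.5:  # keep surprisingly frequent pairs
              edges.append((a, b, observed))

      print(edges)  # pairs at chance level, e.g. (extraction, keyphrase), are dropped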
  11. Zajic, D.; Dorr, B.J.; Lin, J.; Schwartz, R.: Multi-candidate reduction : sentence compression as a tool for document summarization tasks (2007) 0.01
    Abstract
    This article examines the application of two single-document sentence compression techniques to the problem of multi-document summarization-a "parse-and-trim" approach and a statistical noisy-channel approach. We introduce the multi-candidate reduction (MCR) framework for multi-document summarization, in which many compressed candidates are generated for each source sentence. These candidates are then selected for inclusion in the final summary based on a combination of static and dynamic features. Evaluations demonstrate that sentence compression is a valuable component of a larger multi-document summarization framework.
    Source
    Information processing and management. 43(2007) no.6, S.1549-1570
    Type
    a
  12. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.01
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.478-483.
    Type
    a
  13. Marsh, E.: ¬A production rule system for message summarisation (1984) 0.01
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.534-537.
    Type
    a
  14. Kim, H.H.; Kim, Y.H.: ERP/MMR algorithm for classifying topic-relevant and topic-irrelevant visual shots of documentary videos (2019) 0.01
    Abstract
    We propose and evaluate a video summarization method based on a topic relevance model, maximal marginal relevance (MMR), and discriminant analysis to generate a semantically meaningful video skim. The topic relevance model uses event-related potential (ERP) components to describe the process of topic relevance judgment. More specifically, the topic relevance model indicates that N400 and P600, which have been successfully applied to the mismatch process of a stimulus and the discourse-internal reorganization and integration process of a stimulus, respectively, are used for the topic mismatch process of a topic-irrelevant video shot and the topic formation process of a topic-relevant video shot. To evaluate our proposed ERP/MMR-based method, we compared the video skims generated by the ERP/MMR-based, ERP-based, and shot boundary detection (SBD) methods with ground truth skims. The results showed that at a significance level of 0.05, the ROUGE-1 scores of the ERP/MMR method are statistically higher than those of the SBD method, and the diversity scores of the ERP/MMR method are statistically higher than those of the ERP method. This study suggested that the proposed method may be applied to the construction of a video skim without operational intervention, such as the insertion of a black screen between video shots.
    Footnote
    Contribution to a 'Special issue on neuro-information science'.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.9, S.931-941
    Type
    a
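    The MMR criterion named in the entry above is compact enough to sketch. A minimal version, assuming plain word-overlap (Jaccard) similarity in place of the paper's actual relevance and shot-similarity features:

      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if a | b else 0.0

      def mmr_select(candidates, query, k=2, lam=0.7):
          """Pick items relevant to the query but unlike those already chosen."""
          selected, pool = [], list(candidates)
          while pool and len(selected) < k:
              def score(c):
                  relevance = jaccard(c, query)
                  redundancy = max((jaccard(c, s) for s in selected), default=0.0)
                  return lam * relevance - (1 - lam) * redundancy
              best = max(pool, key=score)
              selected.append(best)
              pool.remove(best)
          return selected

      query = {"volcano", "eruption"}
      shots = [{"volcano", "eruption", "lava"},
               {"volcano", "eruption", "smoke"},
               {"interview", "scientist", "volcano"}]
      print(mmr_select(shots, query))  # a relevant shot, then a non-redundant one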
  15. Soricut, R.; Marcu, D.: Abstractive headline generation using WIDL-expressions (2007) 0.01
    Abstract
    We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows similar performance in quality with a state-of-the-art, extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation.
    Source
    Information processing and management. 43(2007) no.6, S.1536-1548
    Type
    a
  16. Endres-Niggemeyer, B.: Summarizing information (1998) 0.01
    Abstract
    Summarizing is the process of reducing the large information size of something like a novel or a scientific paper to a short summary or abstract comprising only the most essential points. Summarizing is frequent in everyday communication, but it is also a professional skill for journalists and others. Automated summarizing functions are urgently needed by Internet users who wish to avoid being overwhelmed by information. This book presents the state of the art and surveys related research; it deals with everyday and professional summarizing as well as computerized approaches. The author focuses in detail on the cognitive process involved in summarizing and supports this with a multimedia simulation system on the accompanying CD-ROM.
  17. Craven, T.C.: ¬A computer-aided abstracting tool kit (1993) 0.01
    Abstract
    Describes the abstracting assistance features being prototyped in the TEXNET text network management system. Sentence weighting methods include: weighting negatively or positively on the stems in a selected passage; weighting on general lists of cue words; adjusting weights of selected segments; and weighting on the occurrence of frequent stems. The user may adjust a number of parameters: the minimum strength of extracts; the threshold for frequent words/stems; and the amount by which sentence weight is adjusted for each weighting type.
    Source
    Canadian journal of information and library science. 18(1993) no.2, S.20-31
    Type
    a
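    Cue-word and frequent-stem sentence weighting of the kind listed above has a classic shape (Luhn- and Edmundson-style extracting). A small sketch; the cue lists, weights, and threshold are invented placeholders, not TEXNET's parameters:

      import re
      from collections import Counter

      BONUS_CUES = {"significant", "conclude", "results"}  # invented cue lists
      STIGMA_CUES = {"perhaps", "possibly"}

      def weight_sentences(text, freq_threshold=2):
          sentences = re.split(r"(?<=[.!?])\s+", text.strip())
          words = re.findall(r"[a-z]+", text.lower())
          frequent = {w for w, c in Counter(words).items() if c >= freq_threshold}
          scored = []
          for s in sentences:
              tokens = set(re.findall(r"[a-z]+", s.lower()))
              score = (len(tokens & frequent)            # frequent-stem weighting
                       + 2 * len(tokens & BONUS_CUES)    # positive cue words
                       - 2 * len(tokens & STIGMA_CUES))  # negative cue words
              scored.append((score, s))
          return sorted(scored, reverse=True)

      text = ("Summarization reduces a text to its essential points. "
              "Perhaps other methods exist. "
              "We conclude that summarization results are significant.")
      for score, s in weight_sentences(text):
          print(score, s)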
  18. Brandow, R.; Mitze, K.; Rau, L.F.: Automatic condensation of electronic publications by sentence selection (1995) 0.01
    Abstract
    Description of a system that performs domain-independent automatic condensation of news from a large commercial news service encompassing 41 different publications. This system was evaluated against a system that condensed the same articles using only the first portions of the texts (the 'lead'), up to the target length of the summaries. Three lengths of articles were evaluated for 250 documents by both systems, totalling 1,500 suitability judgements in all. The lead-based summaries outperformed the 'intelligent' summaries significantly, achieving acceptability ratings of over 90%, compared to 74.7%.
    Source
    Information processing and management. 31(1995) no.5, S.675-685
    Type
    a
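    The lead baseline that wins in the study above is simple to state: emit the opening sentences of the article up to the target length. A minimal sketch:

      import re

      def lead_summary(text: str, target_words: int = 30) -> str:
          """Take whole sentences from the start until the word budget is hit."""
          sentences = re.split(r"(?<=[.!?])\s+", text.strip())
          out, count = [], 0
          for s in sentences:
              n = len(s.split())
              if out and count + n > target_words:
                  break
              out.append(s)
              count += n
          return " ".join(out)

      article = ("First sentence of the story. Second sentence with details. "
                 "Third sentence that exceeds the budget for a short lead.")
      print(lead_summary(article, target_words=10))  # first two sentences only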
  19. Sjöbergh, J.: Older versions of the ROUGEeval summarization evaluation system were easier to fool (2007) 0.01
    Abstract
    We show some limitations of the ROUGE evaluation method for automatic summarization. We present a method for automatic summarization based on a Markov model of the source text. Using a simple greedy word-selection strategy, summaries with high ROUGE scores are generated. However, these summaries would not be considered good by human readers. The method can be adapted to trick different settings of the ROUGEeval package.
    Source
    Information processing and management. 43(2007) no.6, S.1500-1505
    Type
    a
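    ROUGE-1 recall, the kind of unigram-overlap score the paper above shows can be gamed, fits in a few lines. Note how a shuffled "word salad" built from reference words still scores perfectly:

      from collections import Counter

      def rouge1_recall(candidate: str, reference: str) -> float:
          """Fraction of reference unigrams (with multiplicity) found in the candidate."""
          cand = Counter(candidate.lower().split())
          ref = Counter(reference.lower().split())
          overlap = sum(min(cand[w], ref[w]) for w in ref)
          return overlap / sum(ref.values())

      reference = "the cat sat on the mat"
      print(rouge1_recall("the cat sat on the mat", reference))  # 1.0
      print(rouge1_recall("the the cat mat on sat", reference))  # still 1.0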
  20. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.01
    Abstract
    Four working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge-processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case: four working steps in which an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract.
    Source
    Information processing and management. 31(1995) no.5, S.631-674
    Type
    a

Types

  • a 92
  • el 1
  • m 1
  • r 1
  • s 1