Search (10 results, page 1 of 1)

  • theme_ss:"Automatisches Abstracting"
  1. Ruda, S.: Maschinenunterstützte Kondensierung von Fachtexten mit CONNY : Abstracting am Beispiel eines 'Nachrichten für Dokumentation'-Textkorpus (1994) 0.01
    0.006297461 = product of:
      0.06297461 = sum of:
        0.06297461 = weight(_text_:allgemeine in 2376) [ClassicSimilarity], result of:
          0.06297461 = score(doc=2376,freq=2.0), product of:
            0.15515608 = queryWeight, product of:
              5.2479978 = idf(docFreq=631, maxDocs=44218)
              0.029564815 = queryNorm
            0.4058791 = fieldWeight in 2376, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2479978 = idf(docFreq=631, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2376)
      0.1 = coord(1/10)
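    The explain tree above is standard Lucene ClassicSimilarity output, and the listed score can be reproduced with plain arithmetic. The following Python sketch is only an illustration using the constants printed above, not Lucene source code; the idf reconstruction 1 + ln(maxDocs/(docFreq+1)) is an assumption that happens to match the printed value.

    ```python
    # Sketch reproducing the ClassicSimilarity explain tree for doc 2376
    # and the query term "allgemeine"; constants are taken from the output above.
    import math

    tf = math.sqrt(2.0)                     # 1.4142135 = tf(freq=2.0)
    idf = 1 + math.log(44218 / (631 + 1))   # ~5.2479978 = idf(docFreq=631, maxDocs=44218), assumed formula
    query_norm = 0.029564815                # queryNorm, as printed
    field_norm = 0.0546875                  # fieldNorm(doc=2376)

    query_weight = idf * query_norm             # ~0.15515608
    field_weight = tf * idf * field_norm        # ~0.4058791
    term_weight = query_weight * field_weight   # ~0.06297461

    score = (1 / 10) * term_weight              # coord(1/10): 1 of 10 query clauses matched
    print(score)                                # ~0.006297461, the value listed for entry 1
    ```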
    
    Abstract
    The text corpus consists of documents from the journal 'Nachrichten für Dokumentation' written by 50 different authors over a twenty-year period (1969-1989). The examination of the abstracts showed that only 15 of the 50 abstracts consist exclusively of 'standard-conforming' abstract sentences, and that no abstract satisfies all requirements of the guidelines. In this respect the abstracts expose the abstracting guidelines as 'wishful thinking', which reinforces the case for machine-assisted abstracting based on linguistic features. CONNY is an interactive linguistic abstracting model for specialized texts that offers the abstractor general abstracting guidelines operating on the surface structure. It condenses the primary-text passages judged relevant for the abstract at the primary-text, sentence, and abstract level with respect to lexis, syntax, and semantics.
  2. Meyer, R.: Allein, es wär' so schön gewesen : Der Copernic Summarizer kann Internettexte leider nicht befriedigend und sinnvoll zusammenfassen (2002) 0.00
    0.0031487306 = product of:
      0.031487305 = sum of:
        0.031487305 = weight(_text_:allgemeine in 648) [ClassicSimilarity], result of:
          0.031487305 = score(doc=648,freq=2.0), product of:
            0.15515608 = queryWeight, product of:
              5.2479978 = idf(docFreq=631, maxDocs=44218)
              0.029564815 = queryNorm
            0.20293956 = fieldWeight in 648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2479978 = idf(docFreq=631, maxDocs=44218)
              0.02734375 = fieldNorm(doc=648)
      0.1 = coord(1/10)
    
    Source
    Frankfurter Allgemeine Zeitung. Nr.xxx vom 9.7.2002, S.xx
  3. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.00
    0.0016022498 = product of:
      0.016022498 = sum of:
        0.016022498 = product of:
          0.032044996 = sum of:
            0.032044996 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.032044996 = score(doc=6599,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
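    Entries 3-10 differ from the first two only in that the matching term sits one level deeper in the explain tree, so an inner coord(1/2) halves the term weight before the outer coord(1/10) is applied. A short sketch of the same arithmetic for document 6599, again only an illustration using the printed constants rather than Lucene code:

    ```python
    # Sketch of the doubly nested explain tree for doc 6599 and the term "22";
    # constants are taken from the output above.
    import math

    tf = math.sqrt(2.0)        # 1.4142135 = tf(freq=2.0)
    idf = 3.5018296            # idf(docFreq=3622, maxDocs=44218), as printed
    query_norm = 0.029564815   # queryNorm
    field_norm = 0.0625        # fieldNorm(doc=6599)

    term_weight = (idf * query_norm) * (tf * idf * field_norm)  # ~0.032044996
    inner = term_weight * (1 / 2)   # inner coord(1/2)
    score = inner * (1 / 10)        # outer coord(1/10)
    print(score)                    # ~0.0016022498, the value listed for entry 3
    ```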
    
    Date
    26. 2.1997 10:22:43
  4. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.00
    0.0016022498 = product of:
      0.016022498 = sum of:
        0.016022498 = product of:
          0.032044996 = sum of:
            0.032044996 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.032044996 = score(doc=6751,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Date
    6. 3.1997 16:22:15
  5. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.00
    0.0016022498 = product of:
      0.016022498 = sum of:
        0.016022498 = product of:
          0.032044996 = sum of:
            0.032044996 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.032044996 = score(doc=6974,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  6. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.00
    0.0012016873 = product of:
      0.012016872 = sum of:
        0.012016872 = product of:
          0.024033744 = sum of:
            0.024033744 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.024033744 = score(doc=948,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
  7. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.00
    0.0010014061 = product of:
      0.010014061 = sum of:
        0.010014061 = product of:
          0.020028122 = sum of:
            0.020028122 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.020028122 = score(doc=5290,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Date
    22. 7.2006 17:25:48
  8. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.00
    0.0010014061 = product of:
      0.010014061 = sum of:
        0.010014061 = product of:
          0.020028122 = sum of:
            0.020028122 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.020028122 = score(doc=2640,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Date
    22. 1.2016 12:29:41
  9. Oh, H.; Nam, S.; Zhu, Y.: Structured abstract summarization of scientific articles : summarization using full-text section information (2023) 0.00
    0.0010014061 = product of:
      0.010014061 = sum of:
        0.010014061 = product of:
          0.020028122 = sum of:
            0.020028122 = weight(_text_:22 in 889) [ClassicSimilarity], result of:
              0.020028122 = score(doc=889,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.19345059 = fieldWeight in 889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=889)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Date
    22. 1.2023 18:57:12
  10. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.00
    0.0010014061 = product of:
      0.010014061 = sum of:
        0.010014061 = product of:
          0.020028122 = sum of:
            0.020028122 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.020028122 = score(doc=1012,freq=2.0), product of:
                0.10353094 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029564815 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.5 = coord(1/2)
      0.1 = coord(1/10)
    
    Date
    22. 6.2023 14:55:20