Search (31 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Automatisches Abstracting"
  • year_i:[1990 TO 2000}
  1. Endres-Niggemeyer, B.: ¬An empirical process model of abstracting (1992) 0.02
    0.019806186 = product of:
      0.11883711 = sum of:
        0.014048031 = weight(_text_:und in 8834) [ClassicSimilarity], result of:
          0.014048031 = score(doc=8834,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
        0.08206912 = weight(_text_:informationswissenschaft in 8834) [ClassicSimilarity], result of:
          0.08206912 = score(doc=8834,freq=4.0), product of:
            0.09716552 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.021569785 = queryNorm
            0.84463215 = fieldWeight in 8834, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
        0.0052914224 = weight(_text_:in in 8834) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=8834,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
        0.014048031 = weight(_text_:und in 8834) [ClassicSimilarity], result of:
          0.014048031 = score(doc=8834,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.29385152 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
        0.0033805002 = weight(_text_:s in 8834) [ClassicSimilarity], result of:
          0.0033805002 = score(doc=8834,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14414869 = fieldWeight in 8834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=8834)
      0.16666667 = coord(5/30)
    
    Pages
    S.219-228
    Series
    Schriften zur Informationswissenschaft; Bd.7
    Source
    Mensch und Maschine: Informationelle Schnittstellen der Kommunikation. Proc. of the 3rd Int. Symposium on Information Science (ISI'92), Saarbrücken, 5.-7.11.1992. Ed.: H.H. Zimmermann, H.-D. Luckhardt and A. Schulz
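    The relevance figures shown for this entry follow Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm, and the sum is scaled by the coordination factor coord(5/30). The short Python sketch below merely re-computes entry 1's score from the numbers printed above; the helper name term_score is ours, not part of the retrieval software.
      from math import sqrt

      QUERY_NORM = 0.021569785  # queryNorm printed in every clause above

      def term_score(freq, idf, field_norm, query_norm=QUERY_NORM):
          # ClassicSimilarity per-term contribution: queryWeight * fieldWeight
          query_weight = idf * query_norm                # idf * queryNorm
          field_weight = sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      # Clauses of entry 1 (doc 8834): (freq, idf, fieldNorm)
      clauses = [
          (2.0, 2.216367, 0.09375),   # _text_:und
          (4.0, 4.504705, 0.09375),   # _text_:informationswissenschaft
          (2.0, 1.3602545, 0.09375),  # _text_:in
          (2.0, 2.216367, 0.09375),   # _text_:und (second clause)
          (2.0, 1.0872376, 0.09375),  # _text_:s
      ]

      total = sum(term_score(*c) for c in clauses) * (5 / 30)  # coord(5/30)
      print(total)  # approx. 0.019806186, matching the displayed score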
  2. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.00
    0.002099853 = product of:
      0.02099853 = sum of:
        0.00705523 = weight(_text_:in in 6974) [ClassicSimilarity], result of:
          0.00705523 = score(doc=6974,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.24046129 = fieldWeight in 6974, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.002253667 = weight(_text_:s in 6974) [ClassicSimilarity], result of:
          0.002253667 = score(doc=6974,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 6974, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=6974)
        0.011689632 = product of:
          0.023379264 = sum of:
            0.023379264 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.023379264 = score(doc=6974,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Abstract
    Describes the application of weighting strategies to model uncertainties and probabilities in automatic abstracting systems, particularly in the concept selection phase. The weights were originally assigned in an ad hoc manner and were then refined by manual analysis of the results. The new method attempts to derive a more systematic approach and does so using a genetic algorithm
    Pages
    S.145-153
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
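    The Jones and Bradbeer abstract above only states that the ad hoc weights were later tuned with a genetic algorithm. The following is a minimal, hypothetical Python sketch of that general idea, not the authors' implementation: weight vectors form the population, a fitness function (here a stand-in; in the paper it would score the abstracts produced with those weights) drives selection, and crossover plus mutation generate new candidates.
      import random

      random.seed(0)
      N_WEIGHTS = 5                  # one weight per selection feature (hypothetical)
      POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 50, 0.1

      def fitness(weights):
          # Stand-in fitness: distance to an invented target profile. In the
          # paper's setting this would score the abstracts produced with `weights`.
          target = [0.4, 0.2, 0.1, 0.2, 0.1]
          return -sum((w - t) ** 2 for w, t in zip(weights, target))

      def crossover(a, b):
          cut = random.randrange(1, N_WEIGHTS)
          return a[:cut] + b[cut:]

      def mutate(w):
          return [x + random.gauss(0, 0.05) if random.random() < MUTATION_RATE else x
                  for x in w]

      population = [[random.random() for _ in range(N_WEIGHTS)] for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          population.sort(key=fitness, reverse=True)
          parents = population[: POP_SIZE // 2]     # truncation selection
          children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE - len(parents))]
          population = parents + children

      print([round(w, 2) for w in max(population, key=fitness)])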
  3. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.00
    0.0017470915 = product of:
      0.017470915 = sum of:
        0.003527615 = weight(_text_:in in 6751) [ClassicSimilarity], result of:
          0.003527615 = score(doc=6751,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 6751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6751)
        0.002253667 = weight(_text_:s in 6751) [ClassicSimilarity], result of:
          0.002253667 = score(doc=6751,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 6751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=6751)
        0.011689632 = product of:
          0.023379264 = sum of:
            0.023379264 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.023379264 = score(doc=6751,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Abstract
    Presents a system for summarizing quantitative data in natural language, focusing on the use of a corpus of basketball game summaries, drawn from online news services, to empirically shape the system design and to evaluate the approach. Initial corpus analysis revealed characteristics of textual summaries that challenge the capabilities of current language generation systems. A revision-based corpus analysis was used to identify and encode the revision rules of the system. Presents a quantitative evaluation, using several test corpora, to measure the robustness of the new revision-based model
    Date
    6. 3.1997 16:22:15
    Source
    Artificial intelligence. 85(1996) nos.1/2, S.135-179
  4. Liu, J.; Wu, Y.; Zhou, L.: ¬A hybrid method for abstracting newspaper articles (1999) 0.00
    0.0015820917 = product of:
      0.015820917 = sum of:
        0.003527615 = weight(_text_:in in 4059) [ClassicSimilarity], result of:
          0.003527615 = score(doc=4059,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 4059, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4059)
        0.010039635 = product of:
          0.030118903 = sum of:
            0.030118903 = weight(_text_:l in 4059) [ClassicSimilarity], result of:
              0.030118903 = score(doc=4059,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.35131297 = fieldWeight in 4059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4059)
          0.33333334 = coord(1/3)
        0.002253667 = weight(_text_:s in 4059) [ClassicSimilarity], result of:
          0.002253667 = score(doc=4059,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 4059, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=4059)
      0.1 = coord(3/30)
    
    Abstract
    This paper introduces a hybrid method for abstracting Chinese text. It integrates the statistical approach with language understanding. Some linguistic heuristics and segmentation are also incorporated into the abstracting process. The prototype system is of a multipurpose type catering for various users with different requirements. Initial responses show that the proposed method contributes much to the flexibility and accuracy of the automatic Chinese abstracting system. In practice, the present work provides a path to developing an intelligent Chinese system for automating the information
    Source
    Journal of the American Society for Information Science. 50(1999) no.13, S.1234-1245
  5. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.00
    9.2955335E-4 = product of:
      0.0139433 = sum of:
        0.002253667 = weight(_text_:s in 6599) [ClassicSimilarity], result of:
          0.002253667 = score(doc=6599,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 6599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=6599)
        0.011689632 = product of:
          0.023379264 = sum of:
            0.023379264 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
              0.023379264 = score(doc=6599,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30952093 = fieldWeight in 6599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6599)
          0.5 = coord(1/2)
      0.06666667 = coord(2/30)
    
    Date
    26. 2.1997 10:22:43
    Source
    Microcomputers for information management. 13(1996) no.1, S.41-55
  6. Advances in automatic text summarization (1999) 0.00
    7.4102223E-4 = product of:
      0.011115333 = sum of:
        0.006236001 = weight(_text_:in in 6191) [ClassicSimilarity], result of:
          0.006236001 = score(doc=6191,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 6191, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=6191)
        0.0048793326 = weight(_text_:s in 6191) [ClassicSimilarity], result of:
          0.0048793326 = score(doc=6191,freq=6.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.20806074 = fieldWeight in 6191, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=6191)
      0.06666667 = coord(2/30)
    
    Footnote
    Review in: Knowledge organization 27(2000) no.3, S.178-180 (H. Saggion)
    Pages
    434 S
    Type
    s
  7. Johnson, F.C.; Paice, C.D.; Black, W.J.; Neal, A.P.: ¬The application of linguistic processing to automatic abstract generation (1993) 0.00
    6.813307E-4 = product of:
      0.01021996 = sum of:
        0.006236001 = weight(_text_:in in 2290) [ClassicSimilarity], result of:
          0.006236001 = score(doc=2290,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 2290, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2290)
        0.003983958 = weight(_text_:s in 2290) [ClassicSimilarity], result of:
          0.003983958 = score(doc=2290,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.16988087 = fieldWeight in 2290, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=2290)
      0.06666667 = coord(2/30)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.538-552.
    Source
    Journal of document and text management. 1(1993), S.215-241
  8. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.00
    6.813307E-4 = product of:
      0.01021996 = sum of:
        0.006236001 = weight(_text_:in in 1949) [ClassicSimilarity], result of:
          0.006236001 = score(doc=1949,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
        0.003983958 = weight(_text_:s in 1949) [ClassicSimilarity], result of:
          0.003983958 = score(doc=1949,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.16988087 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=1949)
      0.06666667 = coord(2/30)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.478-483.
    Source
    Science. 264(1994), S.1421-1426
  9. Bateman, J.; Teich, E.: Selective information presentation in an integrated publication system : an application of genre-driven text generation (1995) 0.00
    6.7448284E-4 = product of:
      0.010117242 = sum of:
        0.0061733257 = weight(_text_:in in 2928) [ClassicSimilarity], result of:
          0.0061733257 = score(doc=2928,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21040362 = fieldWeight in 2928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=2928)
        0.003943917 = weight(_text_:s in 2928) [ClassicSimilarity], result of:
          0.003943917 = score(doc=2928,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.16817348 = fieldWeight in 2928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.109375 = fieldNorm(doc=2928)
      0.06666667 = coord(2/30)
    
    Source
    Information processing and management. 31(1995) no.5, S.753-767
  10. Johnson, F.: Automatic abstracting research (1995) 0.00
    6.2059314E-4 = product of:
      0.009308897 = sum of:
        0.00705523 = weight(_text_:in in 3847) [ClassicSimilarity], result of:
          0.00705523 = score(doc=3847,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.24046129 = fieldWeight in 3847, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3847)
        0.002253667 = weight(_text_:s in 3847) [ClassicSimilarity], result of:
          0.002253667 = score(doc=3847,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 3847, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=3847)
      0.06666667 = coord(2/30)
    
    Abstract
    Discusses the attraction for researchers of the prospect of automatically generating abstracts but notes that the promise of superseding the human effort has yet to be realized. Notes ways in which progress in automatic abstracting research may come about and suggests a shift in the aim from reproducing the conventional benefits of abstracts to accentuating the advantages to users of the computerized representation of information in large textual databases
    Source
    Library review. 44(1995) no.8, S.28-36
  11. Ahmad, K.: Text summarisation : the role of lexical cohesion analysis (1995) 0.00
    5.575784E-4 = product of:
      0.008363675 = sum of:
        0.006110009 = weight(_text_:in in 5795) [ClassicSimilarity], result of:
          0.006110009 = score(doc=5795,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 5795, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5795)
        0.002253667 = weight(_text_:s in 5795) [ClassicSimilarity], result of:
          0.002253667 = score(doc=5795,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 5795, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=5795)
      0.06666667 = coord(2/30)
    
    Abstract
    The work in automatic text summary focuses mainly on computational models of texts. The artificial intelligence related work in text summary deals mainly with narrative texts such as newspaper reports and stories. Presents a study on the summarisation of non-narrative texts such as those in scientific and technical communication. Discusses syntactic cohesion; lexical cohesion; complex lexical repetition; simple and complex paraphrase; bonds and links; and Tele-pattan, an architecture for a cohesion-based text analysis and summarisation system working on SGML
    Source
    New review of document and text management. 1995, no.1, S.321-335
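    As a generic illustration of the lexical-repetition links and bonds mentioned in the Ahmad abstract above (and not of the Tele-pattan architecture itself), the sketch below counts, for each sentence, how many other sentences share at least a given number of content words with it; sentences with many such bonds would be candidates for a summary.
      import re
      from itertools import combinations

      STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are", "on", "for"}

      def content_words(sentence):
          return set(re.findall(r"[a-z]+", sentence.lower())) - STOPWORDS

      def cohesion_scores(sentences, min_shared=2):
          # Count, for each sentence, its 'bonds': links to other sentences
          # sharing at least `min_shared` content words.
          words = [content_words(s) for s in sentences]
          bonds = [0] * len(sentences)
          for i, j in combinations(range(len(sentences)), 2):
              if len(words[i] & words[j]) >= min_shared:
                  bonds[i] += 1
                  bonds[j] += 1
          return bonds

      text = ["Automatic summarisation selects sentences from a text.",
              "Lexical cohesion links sentences that repeat the same words.",
              "Sentences with many cohesion links are candidates for the summary."]
      print(cohesion_scores(text, min_shared=1))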
  12. Moens, M.-F.; Uyttendaele, C.; Dumotier, J.: Abstracting of legal cases : the potential of clustering based on the selection of representative objects (1999) 0.00
    5.0708273E-4 = product of:
      0.007606241 = sum of:
        0.005915991 = weight(_text_:in in 2944) [ClassicSimilarity], result of:
          0.005915991 = score(doc=2944,freq=10.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.20163295 = fieldWeight in 2944, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2944)
        0.0016902501 = weight(_text_:s in 2944) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=2944,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 2944, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=2944)
      0.06666667 = coord(2/30)
    
    Abstract
    The SALOMON project automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts text units from the case text to form a case summary. Such a case summary facilitates the rapid determination of the relevance of the case or may be employed in text search. An important part of the research concerns the development of techniques for automatic recognition of representative text paragraphs (or sentences) in texts of unrestricted domains. These techniques are employed to eliminate redundant material in the case texts, and to identify informative text paragraphs which are relevant to include in the case summary. An evaluation of a test set of 700 criminal cases demonstrates that the algorithms have an application potential for automatic indexing, abstracting, and text linkage
    Source
    Journal of the American Society for Information Science. 50(1999) no.2, S.151-161
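    A rough, hypothetical sketch of the idea in the Moens, Uyttendaele and Dumotier abstract above: redundant paragraphs are grouped by word overlap and only one representative per group is kept before further abstracting. The greedy grouping and the Jaccard threshold are illustrative simplifications, not SALOMON's actual clustering.
      def jaccard(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      def representatives(paragraphs, threshold=0.5):
          # Greedy grouping: keep a paragraph only if it is not too similar to an
          # already kept representative (selection of representative objects).
          kept, kept_words = [], []
          for p in paragraphs:
              words = set(p.lower().split())
              if all(jaccard(words, w) < threshold for w in kept_words):
                  kept.append(p)
                  kept_words.append(words)
          return kept

      paras = ["the court dismissed the appeal",
               "the appeal was dismissed by the court",
               "the defendant was fined 500 euro"]
      print(representatives(paras, threshold=0.4))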
  13. Craven, T.C.: ¬A computer-aided abstracting tool kit (1993) 0.00
    4.8283124E-4 = product of:
      0.007242468 = sum of:
        0.004988801 = weight(_text_:in in 6506) [ClassicSimilarity], result of:
          0.004988801 = score(doc=6506,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.17003182 = fieldWeight in 6506, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6506)
        0.002253667 = weight(_text_:s in 6506) [ClassicSimilarity], result of:
          0.002253667 = score(doc=6506,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 6506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=6506)
      0.06666667 = coord(2/30)
    
    Abstract
    Describes the abstracting assistance features being prototyped in the TEXNET text network management system. Sentence weighting methods include: weighting negatively or positively on the stems in a selected passage; weighting on general lists of cue words; adjusting weights of selected segments; and weighting on the occurrence of frequent stems. The user may adjust a number of parameters: the minimum strength of extracts; the threshold for frequent words/stems; and the amount by which the sentence weight is to be adjusted for each weighting type
    Source
    Canadian journal of information and library science. 18(1993) no.2, S.20-31
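    The sentence-weighting scheme listed in the Craven abstract above can be pictured with a small generic sketch: each sentence collects weight from frequent stems, from cue-word lists, and from user-supplied positive or negative stems, and sentences whose weight reaches a minimum strength are extracted. The cue lists and parameter values are invented for illustration and are not TEXNET's.
      from collections import Counter

      BONUS_CUES = {"conclude", "results", "significant"}   # illustrative cue words
      PENALTY_CUES = {"perhaps", "possibly"}

      def extract(sentences, pos_stems=frozenset(), neg_stems=frozenset(),
                  freq_threshold=2, min_strength=2.0):
          stems = [s.lower().split() for s in sentences]
          freq = Counter(w for sent in stems for w in sent)
          frequent = {w for w, c in freq.items() if c >= freq_threshold}
          extracts = []
          for sentence, words in zip(sentences, stems):
              weight = 0.0
              weight += sum(1 for w in words if w in frequent)     # frequent stems
              weight += sum(1 for w in words if w in BONUS_CUES)   # cue words
              weight -= sum(1 for w in words if w in PENALTY_CUES)
              weight += sum(1 for w in words if w in pos_stems)    # user adjustments
              weight -= sum(1 for w in words if w in neg_stems)
              if weight >= min_strength:
                  extracts.append(sentence)
          return extracts

      print(extract(["the results are significant", "perhaps it works"]))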
  14. Craven, T.C.: ¬A phrase flipper for the assistance of writers of abstracts and other text (1995) 0.00
    4.8283124E-4 = product of:
      0.007242468 = sum of:
        0.004988801 = weight(_text_:in in 4897) [ClassicSimilarity], result of:
          0.004988801 = score(doc=4897,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.17003182 = fieldWeight in 4897, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4897)
        0.002253667 = weight(_text_:s in 4897) [ClassicSimilarity], result of:
          0.002253667 = score(doc=4897,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=4897)
      0.06666667 = coord(2/30)
    
    Abstract
    Describes computerized tools for computer assisted abstracting. FlipPhr is a Microsoft Windows application program that rearranges (flips) phrases or other expressions in accordance with rules in a grammar. The flipping may be invoked with a single keystroke from within various Windows application programs that allow cutting and pasting of text. The user may modify the grammar to provide for different kinds of flipping
    Source
    Canadian journal of information and library science. 20(1995) nos.3/4, S.41-49
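    A toy reconstruction of the kind of rule FlipPhr applies, according to the abstract above: a phrase is rewritten by the first matching rule in a user-editable grammar, here the common 'A of B' to 'B A' inversion. The rule format below is invented; the record does not document FlipPhr's actual grammar.
      import re

      # Each rule: (pattern, replacement); users could edit or extend this list.
      RULES = [
          (re.compile(r"^(?P<head>.+?) of (?P<mod>.+)$"), r"\g<mod> \g<head>"),
          (re.compile(r"^(?P<head>.+?) for (?P<mod>.+)$"), r"\g<mod> \g<head>"),
      ]

      def flip(phrase):
          # Return the phrase rewritten by the first matching rule, else unchanged.
          for pattern, repl in RULES:
              if pattern.match(phrase):
                  return pattern.sub(repl, phrase)
          return phrase

      print(flip("analysis of legal cases"))   # -> "legal cases analysis"
      print(flip("tool kit for abstracting"))  # -> "abstracting tool kit"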
  15. Moens, M.-F.; Uyttendaele, C.: Automatic text structuring and categorization as a first step in summarizing legal cases (1997) 0.00
    4.6544487E-4 = product of:
      0.0069816727 = sum of:
        0.0052914224 = weight(_text_:in in 2256) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=2256,freq=8.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 2256, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2256)
        0.0016902501 = weight(_text_:s in 2256) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=2256,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 2256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=2256)
      0.06666667 = coord(2/30)
    
    Abstract
    The SALOMON system automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts relevant text units from the case text to form a case summary. Such a case profile facilitates the rapid determination of the relevance of the case or may be employed in text search. In a first important abstracting step SALOMON performs an initial categorization of legal criminal cases and structures the case text into separate legally relevant and irrelevant components. A text grammar represented as a semantic network is used to automatically determine the category of the case and its components. It extracts general data from the case and identifies text portions relevant for further abstracting. Prior knowledge of the text structure and its indicative cues may support automatic abstracting. A text grammar is a promising form for representing the knowledge involved
    Source
    Information processing and management. 33(1997) no.6, S.727-737
  16. Craven, T.C.: ¬An experiment in the use of tools for computer-assisted abstracting (1996) 0.00
    4.6485878E-4 = product of:
      0.0069728815 = sum of:
        0.0045825066 = weight(_text_:in in 7426) [ClassicSimilarity], result of:
          0.0045825066 = score(doc=7426,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1561842 = fieldWeight in 7426, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=7426)
        0.002390375 = weight(_text_:s in 7426) [ClassicSimilarity], result of:
          0.002390375 = score(doc=7426,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.101928525 = fieldWeight in 7426, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=7426)
      0.06666667 = coord(2/30)
    
    Abstract
    Experimental subjects wrote abstracts of an article using a simplified version of the TEXNET abstracting assistance software. In addition to the fulltext, the 35 subjects were presented with either keywords or phrases extracted automatically. The resulting abstracts, and the times taken, were recorded automatically; some additional information was gathered by oral questionnaire. Results showed considerable variation among subjects, but 37% found the keywords or phrases quite or very useful in writing their abstracts. Statistical analysis failed to support several hypothesised relations: phrases were not viewed as significantly more helpful than keywords; and abstracting experience did not correlate with originality of wording, approximation of the author abstract, or greater conciseness. Results also suggested possible modifications to the software
    Pages
    S.203-208
    Source
    Global complexity: information, chaos and control. Proceedings of the 59th Annual Meeting of the American Society for Information Science, ASIS'96, Baltimore, Maryland, 21-24 Oct 1996. Ed.: S. Hardin
  17. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    4.2247731E-4 = product of:
      0.0063371593 = sum of:
        0.004365201 = weight(_text_:in in 2930) [ClassicSimilarity], result of:
          0.004365201 = score(doc=2930,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.14877784 = fieldWeight in 2930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.0019719584 = weight(_text_:s in 2930) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=2930,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.06666667 = coord(2/30)
    
    Abstract
    4 working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - 4 working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract
    Source
    Information processing and management. 31(1995) no.5, S.631-674
  18. Endres-Niggemeyer, B.: Summarizing information (1998) 0.00
    4.181838E-4 = product of:
      0.0062727565 = sum of:
        0.0045825066 = weight(_text_:in in 688) [ClassicSimilarity], result of:
          0.0045825066 = score(doc=688,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1561842 = fieldWeight in 688, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=688)
        0.0016902501 = weight(_text_:s in 688) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=688,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=688)
      0.06666667 = coord(2/30)
    
    Abstract
    Summarizing is the process of reducing the large information size of something like a novel or a scientific paper to a short summary or abstract comprising only the most essential points. Summarizing is frequent in everyday communication, but it is also a professional skill for journalists and others. Automated summarizing functions are urgently needed by Internet users who wish to avoid being overwhelmed by information. This book presents the state of the art and surveys related research; it deals with everyday and professional summarizing as well as computerized approaches. The author focuses in detail on the cognitive process involved in summarizing and supports this with a multimedia simulation system on the accompanying CD-ROM
    Pages
    VII, 375 S. + 1 CD-ROM
  19. Brandow, R.; Mitze, K.; Rau, L.F.: Automatic condensation of electronic publications by sentence selection (1995) 0.00
    3.854188E-4 = product of:
      0.0057812817 = sum of:
        0.003527615 = weight(_text_:in in 2929) [ClassicSimilarity], result of:
          0.003527615 = score(doc=2929,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 2929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2929)
        0.002253667 = weight(_text_:s in 2929) [ClassicSimilarity], result of:
          0.002253667 = score(doc=2929,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 2929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=2929)
      0.06666667 = coord(2/30)
    
    Abstract
    Description of a system that performs domain-independent automatic condensation of news from a large commercial news service encompassing 41 different publications. This system was evaluated against a system that condensed the same articles using only the first portions of the texts (the 'lead'), up to the target length of the summaries. 3 lengths of articles were evaluated for 250 documents by both systems, totalling 1,500 suitability judgements in all. The lead-based summaries outperformed the 'intelligent' summaries significantly, achieving acceptability ratings of over 90%, compared to 74.7%
    Source
    Information processing and management. 31(1995) no.5, S.675-685
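    The lead baseline that the evaluation above compares against is simple enough to state in a few lines. A sketch under two assumptions of ours (sentences split on end punctuation, the target length given as a character budget): sentences are taken from the start of the article until the budget is reached.
      import re

      def lead_summary(text, target_chars=300):
          # Baseline condensation: keep initial sentences up to the target length.
          sentences = re.split(r"(?<=[.!?])\s+", text.strip())
          summary, used = [], 0
          for sentence in sentences:
              if used + len(sentence) > target_chars and summary:
                  break
              summary.append(sentence)
              used += len(sentence) + 1
          return " ".join(summary)

      print(lead_summary("First sentence. Second sentence. Third sentence.", target_chars=30))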
  20. Sparck Jones, K.; Endres-Niggemeyer, B.: Introduction: automatic summarizing (1995) 0.00
    3.854188E-4 = product of:
      0.0057812817 = sum of:
        0.003527615 = weight(_text_:in in 2931) [ClassicSimilarity], result of:
          0.003527615 = score(doc=2931,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 2931, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2931)
        0.002253667 = weight(_text_:s in 2931) [ClassicSimilarity], result of:
          0.002253667 = score(doc=2931,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 2931, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=2931)
      0.06666667 = coord(2/30)
    
    Abstract
    Automatic summarizing is a research topic whose time has come. The papers illustrate some of the relevant work already under way. Places these papers in their wider context: why research and development on automatic summarizing is timely, what areas of work and ideas it should draw on, how future investigations and experiments can be effectively framed
    Source
    Information processing and management. 31(1995) no.5, S.625-630