Search (65 results, page 4 of 4)

  • theme_ss:"Referieren"
  1. Ou, S.; Khoo, C.; Goh, D.H.; Heng, H.-Y.: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization (2004)
    
    Abstract
    We investigated an approach to automatic discourse parsing of sociology dissertation abstracts as a sentence categorization task. Decision tree induction was used for the automatic categorization. Three models were developed. Model 1 made use of word tokens found in the sentences. Model 2 made use of both word tokens and sentence position in the abstract. In addition to the attributes used in Model 2, Model 3 also considered information regarding the presence of indicator words in surrounding sentences. Model 3 obtained the highest accuracy rate of 74.5% when applied to a test sample, compared to 71.6% for Model 2 and 60.8% for Model 1. The results indicated that information about sentence position can substantially increase the accuracy of categorization, and that indicator words in earlier sentences (before the sentence being processed) also contributed to the categorization accuracy.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
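The feature scheme in this entry maps naturally onto a standard decision-tree pipeline. The sketch below is a minimal illustration under assumptions, not the authors' implementation: it assumes scikit-learn and SciPy, and the sentences, positions, and discourse labels are invented. The token features alone correspond to Model 1; appending the sentence-position column gives Model 2, and Model 3 would add indicator-word features from neighbouring sentences in the same way.

```python
# Minimal sketch of decision-tree sentence categorization with word-token
# features (Model 1) plus sentence position (Model 2). Not the authors'
# code; training data is invented for illustration.
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# (sentence, position in abstract, discourse category) -- toy training data
train = [
    ("This study investigates discourse parsing of abstracts.", 0, "background"),
    ("Sentences were collected from sociology dissertations.", 1, "method"),
    ("Accuracy reached 74.5 percent on the test sample.", 2, "result"),
    ("Position features substantially improved categorization.", 3, "conclusion"),
]
sentences = [s for s, _, _ in train]
positions = csr_matrix([[p] for _, p, _ in train])
labels = [c for _, _, c in train]

vectorizer = CountVectorizer()                    # Model 1: word tokens only
X_tokens = vectorizer.fit_transform(sentences)
X = hstack([X_tokens, positions]).tocsr()         # Model 2: tokens + position

clf = DecisionTreeClassifier(random_state=0).fit(X, labels)

# Categorize a new sentence appearing at position 2 of its abstract
x_new = hstack([vectorizer.transform(["Results show a clear improvement."]),
                csr_matrix([[2]])]).tocsr()
print(clf.predict(x_new))
```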
  2. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997)
    
    Abstract
    Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role in networked information retrieval of abstracts and their functional analogues such as metadata.
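The length and readability comparisons reported in this entry are easy to reproduce in outline. The sketch below is a hedged illustration, not the study's method: the abstract does not name a readability formula, so Flesch reading ease with a naive vowel-group syllable counter stands in, and the three sample abstracts are invented.

```python
# Comparing abstract length and readability across source types.
# Flesch reading ease is an assumed stand-in measure; the syllable
# counter is a rough vowel-group heuristic, and the data is invented.
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

# Hypothetical abstracts keyed by source type
abstracts = {
    "subject tree": "Short page blurb. Lists a site.",
    "subject gateway": ("A curated description of the resource, noting its "
                        "scope, intended audience, and main features."),
    "online database": ("This article examines abstract provision across "
                        "databases, reporting methods, results and "
                        "implications at considerable length."),
}
for source, text in abstracts.items():
    words = len(text.split())
    print(f"{source:16s} {words:3d} words  Flesch {flesch_reading_ease(text):6.1f}")
```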
  3. Lancaster, F.W.: Indexing and abstracting in theory and practice (1998)
    
    Footnote
    Reviews in: JASIS 50(1999) no.8, pp.728-730 (J.-E. Mai); Indexer 21(1999) no.3, p.148 (P.F. Booth); Managing information 6(1999) no.1, p.48 (S.T. Clarke); Electronic library 17(1999) no.3, p.193 (F. Parry)
  4. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017)
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, pp.2101-2115
  5. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007)
    
    Abstract
    Purpose - The literature on abstracts recommends the revision of author-supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is first to survey 187 bibliographic databases to ascertain how many revise abstracts, and then to study the practical amendments made by one of them, LISA (Library and Information Science Abstracts).
    Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 of the 187 databases exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA-amended abstracts were classified into sentence-level and beyond-sentence-level categories, and then as additions, deletions and rephrasings of text.
    Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it involve more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects, which are discussed.
    Originality/value - Amendments made by LISA editors are important in multi- and interdisciplinary research, since they tend to clarify certain aspects such as terminology, and they suggest that abstracts should not always be considered substitutes for the original document. From this point of view, the revision of abstracts can be considered an important factor in enhancing a database's quality.
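The amendment scheme in this entry (additions, deletions, rephrasings) can be approximated with a word-level diff. The sketch below is only an illustration of that classification, not the coding procedure used in the study: it assumes Python's standard difflib, the two sample abstracts are invented, and real editorial rephrasings are subtler than opcode spans.

```python
# Grouping differences between an author abstract and its revised version
# as additions, deletions, or rephrasings, via difflib opcodes.
# Illustrative only; the sample texts are invented.
import difflib

def classify_amendments(author: str, revised: str) -> dict:
    a, b = author.split(), revised.split()
    changes = {"addition": [], "deletion": [], "rephrasing": []}
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "insert":
            changes["addition"].append(" ".join(b[j1:j2]))
        elif tag == "delete":
            changes["deletion"].append(" ".join(a[i1:i2]))
        elif tag == "replace":
            changes["rephrasing"].append((" ".join(a[i1:i2]),
                                          " ".join(b[j1:j2])))
    return changes

author = "The paper looks at metadata quality in libraries."
revised = "The paper examines metadata quality in academic libraries."
for kind, items in classify_amendments(author, revised).items():
    print(kind, items)
```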

Languages

  • e 49
  • d 15
  • f 1

Types

  • a 49
  • m 10
  • r 2
  • s 2
  • b 1
  • el 1
  • n 1