Search (49 results, page 3 of 3)

  • language_ss:"e"
  • theme_ss:"Referieren"
  1. Cleveland, D.B.; Cleveland, A.D.: Introduction to abstracting and indexing (1990)
    
    Footnote
    Rez. in: Journal of the American Society for Information Science. 42(1991) S.532-539 (B.H. Weinberg)
  2. Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008)
    
    Abstract
    Many writers of structured abstracts spend a good deal of time revising and polishing their texts - but is it worth it? Do readers notice the difference? In this paper we report three studies of readers using rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in its constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. Pooling the data from Studies 1 and 2, however, in Study 3, led to significant differences at a higher probability level between the perception of the original and revised abstract as a whole and between the same components as found in Study 1. These results thus indicate that the revised abstract as a whole, as well as certain specific components of it, were judged significantly clearer than the original one. In short, the results of these experiments show that readers can and do perceive differences between original and revised texts - sometimes - and that therefore these efforts are worth the time and effort.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, S.1870-1877
  3. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995)
    
    Source
    Information processing and management. 31(1995) no.5, S.631-674
  4. Rothkegel, A.: Abstracting from the perspective of text production (1995)
    
    Source
    Information processing and management. 31(1995) no.5, S.777-784
  5. Ou, S.; Khoo, C.; Goh, D.H.; Heng, H.-Y.: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization (2004)
    
    Abstract
    We investigated an approach to automatic discourse parsing of sociology dissertation abstracts as a sentence categorization task. Decision tree induction was used for the automatic categorization. Three models were developed. Model 1 made use of word tokens found in the sentences. Model 2 made use of both word tokens and sentence position in the abstract. In addition to the attributes used in Model 2, Model 3 also considered information regarding the presence of indicator words in surrounding sentences. Model 3 obtained the highest accuracy rate of 74.5% when applied to a test sample, compared to 71.6% for Model 2 and 60.8% for Model 1. The results indicated that information about sentence position can substantially increase the accuracy of categorization, and indicator words in earlier sentences (before the sentence being processed) also contribute to the categorization accuracy.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  6. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997)
    
    Abstract
    Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role in networked information retrieval of abstracts and their functional analogues such as metadata.
  7. Lancaster, F.W.: Indexing and abstracting in theory and practice (1998)
    
    Footnote
    Rez. in: JASIS 50(1999) no.8, S.728-730 (J.-E. Mai); Indexer 21(1999) no.3, S.148 (P.F. Booth); Managing information 6(1999) no.1, S.48 (S.T. Clarke); Electronic library 17(1999) no.3, S.193 (F. Parry)
  8. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017)
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2101-2115
  9. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007)
    
    Abstract
    Purpose - The literature on abstracts recommends the revision of author supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is to first survey 187 bibliographic databases to ascertain how many did revise abstracts, and then study the practical amendments made by one of these, i.e. LISA (Library and Information Science Abstracts). Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 databases out of 187 exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA amended abstracts were classified into sentence-level and beyond sentence-level categories, and then as additions, deletions and rephrasing of text. Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it imply more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects which were discussed. Originality/value - Amendments made by LISA editors are important in multi- and inter-disciplinary research, since they tend to clarify certain aspects such as terminology, and suggest that abstracts should not always be considered as substitutes for the original document. From this point of view, the revision of abstracts can be considered as an important factor in enhancing a database's quality.
