Search (60 results, page 2 of 3)

  • language_ss:"e"
  • theme_ss:"Referieren"
  1. Lancaster, F.W.: Indexing and abstracting in theory and practice (2003) 0.01
    0.006647099 = product of:
      0.016617747 = sum of:
        0.004086692 = weight(_text_:a in 4913) [ClassicSimilarity], result of:
          0.004086692 = score(doc=4913,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 4913, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4913)
        0.012531055 = product of:
          0.02506211 = sum of:
            0.02506211 = weight(_text_:information in 4913) [ClassicSimilarity], result of:
              0.02506211 = score(doc=4913,freq=14.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3078936 = fieldWeight in 4913, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4913)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
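The explain tree above is Lucene ClassicSimilarity (TF-IDF) debug output. A minimal sketch reconstructing the first result's score from the figures reported in the tree (the formulas follow ClassicSimilarity's documented definitions; coord(1/2) and coord(2/5) are copied from the tree, and this is an illustration, not Lucene's own code):

```python
import math

# One term's contribution under Lucene ClassicSimilarity:
# score = queryWeight * fieldWeight,
# with queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm.
def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                              # tf(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # queryWeight
    field_weight = tf * idf * field_norm              # fieldWeight
    return query_weight * field_weight

# Values taken directly from the explain tree for doc 4913 above.
QUERY_NORM, FIELD_NORM = 0.046368346, 0.046875
w_a    = term_score(2.0, 37942, 44218, QUERY_NORM, FIELD_NORM)    # _text_:a
w_info = term_score(14.0, 20772, 44218, QUERY_NORM, FIELD_NORM)   # _text_:information
score  = (w_a + w_info * 0.5) * 0.4   # coord(1/2) and coord(2/5) from the tree
```

Each weight is queryWeight × fieldWeight, which is why the idf factor appears twice in every branch of the tree.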
    
    Content
    Covers: indexing principles and practice; precoordinate indexes; consistency and quality of indexing; types and functions of abstracts; writing an abstract; evaluation theory and practice; approaches used in indexing and abstracting services; indexing enhancement; natural language in information retrieval; indexing and abstracting of imaginative works; databases of images and sound; automatic indexing and abstracting; the future of indexing and abstracting services
    Footnote
    Rez. in: JASIST 57(2006) no.1, S.144-145 (H. Saggion): "... This volume is a very valuable source of information for not only students and professionals in library and information science but also for individuals and institutions involved in knowledge management and organization activities. Because of its broad coverage of the information science topic, teachers will find the contents of this book useful for courses in the areas of information technology, digital as well as traditional libraries, and information science in general."
    Imprint
    Champaign, IL : Graduate School of Library and Information Science
  2. Hartley, J.; Betts, L.: Common weaknesses in traditional abstracts in the social sciences (2009) 0.01
    0.006548052 = product of:
      0.01637013 = sum of:
        0.005779455 = weight(_text_:a in 3115) [ClassicSimilarity], result of:
          0.005779455 = score(doc=3115,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 3115, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3115)
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 3115) [ClassicSimilarity], result of:
              0.02118135 = score(doc=3115,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 3115, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3115)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Detailed checklists and questionnaires have been used in the past to assess the quality of structured abstracts in the medical sciences. The aim of this article is to report the findings when a simpler checklist was used to evaluate the quality of 100 traditional abstracts published in 53 different social science journals. Most of these abstracts contained information about the aims, methods, and results of the studies. However, many did not report details about the sample sizes, ages, or sexes of the participants, or where the research was carried out. The correlation between the lengths of the abstracts and the amount of information present was 0.37 (p < .001), suggesting that word limits for abstracts may restrict the presence of key information to some extent. We conclude that authors can improve the quality of information in traditional abstracts in the social sciences by using the simple checklist provided in this article.
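The reported r = 0.37 between abstract length and the amount of information present is an ordinary product-moment correlation; a minimal sketch of that computation (the Pearson formula is standard, but the data below are invented for illustration, not taken from the study):

```python
import math

# Pearson product-moment correlation, computed from first principles.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lengths = [95, 120, 150, 180, 210]   # abstract lengths in words (invented)
info    = [4, 7, 5, 8, 7]            # checklist items present (invented)
r = pearson_r(lengths, info)         # a value in [-1, 1]
```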
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2010-2018
    Type
    a
  3. Rothkegel, A.: Abstracting from the perspective of text production (1995) 0.01
    0.006474727 = product of:
      0.016186817 = sum of:
        0.010661141 = weight(_text_:a in 3740) [ClassicSimilarity], result of:
          0.010661141 = score(doc=3740,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 3740, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3740)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 3740) [ClassicSimilarity], result of:
              0.011051352 = score(doc=3740,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 3740, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3740)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     An abstract is itself a text which is subjected to general and specific conditions of text production. The goal - namely the forming of the abstract as a text - controls the whole process of abstracting. This goal-oriented view contrasts with most approaches in this domain, which are source-text oriented. Production strategies are described in terms of text structure building processes, which are reconstructed with methods of modelling from text linguistics and computational linguistics. This leads to a close relationship between the representation of the model and the resulting text. Gives examples in which authentic abstract material is analyzed according to the model. The model itself integrates 3 text levels which are combined and represented in terms of the writer's activities
    Source
    Information processing and management. 31(1995) no.5, S.777-784
    Type
    a
  4. Busch-Lauer, I.-A.: Abstracts in German medical journals : a linguistic analysis (1995) 0.01
    0.0064290287 = product of:
      0.016072571 = sum of:
        0.008258085 = weight(_text_:a in 3677) [ClassicSimilarity], result of:
          0.008258085 = score(doc=3677,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 3677, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3677)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 3677) [ClassicSimilarity], result of:
              0.015628971 = score(doc=3677,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 3677, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3677)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Compares the formats and linguistic devices of German abstracts and their English equivalents, written by German medical scholars for English native speakers. The source is 20 abstracts taken from German medical journals representing different degrees of specialism. The analysis includes: the overall length of articles/abstracts; the representation/arrangement of sections; the linguistic devices. Results show no correlation between the length of articles and the length of abstracts. In contrast to native-speaker author abstracts, 'background information' predominated in the structure of the studied German non-native-speaker abstracts, whereas 'purpose of study' and 'conclusions' were not clearly stated. In linguistic terms, the German abstracts frequently contained lexical hedges, complex and enumerating sentence structures, passive voice and past tense, as well as various types of linking structures
    Source
    Information processing and management. 31(1995) no.5, S.769-776
    Type
    a
  5. O'Rourke, A.J.: Structured abstracts in information retrieval from biomedical databases : a literature survey (1997) 0.01
    0.0064290287 = product of:
      0.016072571 = sum of:
        0.008258085 = weight(_text_:a in 85) [ClassicSimilarity], result of:
          0.008258085 = score(doc=85,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1544581 = fieldWeight in 85, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=85)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 85) [ClassicSimilarity], result of:
              0.015628971 = score(doc=85,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 85, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=85)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Clear guidelines have been provided for structuring the abstracts of original research and review articles and, in the past 10 years, several major medical periodicals have adopted the policy of including such abstracts with all their articles. A review of the literature reveals that proponents claim that structured abstracts enhance peer review, improve information retrieval, and ease critical appraisal. However, some periodicals have not adopted structured abstracts and their opponents claim that they make articles longer and harder to read and restrict author originality. Concludes that previous research on structured abstracts focused on how closely they followed prescribed structure and include salient points of the full text, rather than their role in increasing the usefulness of the article
    Type
    a
  6. Bakewell, K.G.B.; Rowland, G.: Indexing and abstracting (1993) 0.01
    0.0063011474 = product of:
      0.015752869 = sum of:
        0.009437811 = weight(_text_:a in 5540) [ClassicSimilarity], result of:
          0.009437811 = score(doc=5540,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17652355 = fieldWeight in 5540, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5540)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 5540) [ClassicSimilarity], result of:
              0.012630116 = score(doc=5540,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 5540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5540)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     State of the art review of UK developments in indexing and abstracting during the period 1986-1990, covering: bibliographies of indexing and abstracting; British standards (including the revised British Standard on indexing, BS 3700); the Wheatley Medal and Carey Award; a list of indexes published during this period; the role of the computer and automatic indexing; hypermedia; PRECIS; POPSI; relational indexing; thesauri; education and training; the indexing process; newspaper indexing; fiction indexes; the indexing profession; and a review of abstracting and indexing services
    Source
    British librarianship and information work 1986-1990. Ed. by D. Bromley and A.M. Allott
    Type
    a
  7. Montesi, M.; Urdiciain, B.G.: Recent linguistic research into author abstracts : its value for information science (2005) 0.01
    0.006112744 = product of:
      0.01528186 = sum of:
        0.007078358 = weight(_text_:a in 4823) [ClassicSimilarity], result of:
          0.007078358 = score(doc=4823,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 4823, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4823)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 4823) [ClassicSimilarity], result of:
              0.016407004 = score(doc=4823,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 4823, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4823)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     This paper is a review of genre analysis of author abstracts carried out in the area of English for Special Purposes (ESP) since 1990. Given the descriptive character of such analysis, it can be valuable for Information Science (IS), as it provides a picture of the variation in author abstracts, depending on the discipline, culture and language of the author, and the envisaged context. The authors claim that such knowledge can be useful for information professionals who need to revise author abstracts, or use them for other activities in the organization of knowledge, such as subject analysis and control of vocabulary. With this purpose in mind, we summarize various findings of ESP research. We describe how abstracts vary in structure, content and discourse, and how linguists explain such variations. Other factors taken into account are the stylistic and discoursal features of the abstract, lexical choices, and the possible sources of bias. In conclusion, we show how such findings can have practical and theoretical implications for IS.
    Type
    a
  8. Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008) 0.01
    0.0060856803 = product of:
      0.015214201 = sum of:
        0.009632425 = weight(_text_:a in 2362) [ClassicSimilarity], result of:
          0.009632425 = score(doc=2362,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 2362, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2362)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 2362) [ClassicSimilarity], result of:
              0.011163551 = score(doc=2362,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 2362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2362)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Many writers of structured abstracts spend a good deal of time revising and polishing their texts - but is it worth it? Do readers notice the difference? In this paper we report three studies of readers using rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in its constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. Pooling the data from Studies 1 and 2, however, in Study 3, led to significant differences at a higher probability level between the perception of the original and revised abstract as a whole and between the same components as found in Study 1. These results thus indicate that the revised abstract as a whole, as well as certain specific components of it, were judged significantly clearer than the original one. In short, the results of these experiments show that readers can and do perceive differences between original and revised texts - sometimes - and that therefore these efforts are worth the time and effort.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, S.1870-1877
    Type
    a
  9. Endres-Niggemeyer, B.: Summarising text for intelligent communication : results of the Dagstuhl seminar (1994) 0.01
    0.005948606 = product of:
      0.014871514 = sum of:
        0.008173384 = weight(_text_:a in 8867) [ClassicSimilarity], result of:
          0.008173384 = score(doc=8867,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 8867, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=8867)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 8867) [ClassicSimilarity], result of:
              0.013396261 = score(doc=8867,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 8867, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=8867)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    As a result of the transition to full-text storage, multimedia and networking, information systems are becoming more efficient but at the same time more difficult to use, in particular because users are confronted with information volumes that increasingly exceed individual processing capacities. Consequently, there is an increase in the demand for user aids such as summarising techniques. Against this background, the interdisciplinary Dagstuhl Seminar 'Summarising Text for Intelligent Communication' (Dec. 1993) outlined the academic state of the art with regard to summarising (abstracting) and proposed future directions for research and system development. Research is currently shifting its attention from text summarising to summarising states of affairs. Recycling solutions are put forward in order to satisfy short-term needs for summarisation products. In the medium and long term, it is necessary to devise concepts and methods of intelligent summarising which have a better formal and empirical grounding and a more modular organisation
    Type
    a
  10. Bowman, J.H.: Annotation: a lost art in cataloguing (2007) 0.01
    0.005822873 = product of:
      0.014557183 = sum of:
        0.0067426977 = weight(_text_:a in 255) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=255,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 255, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=255)
        0.007814486 = product of:
          0.015628971 = sum of:
            0.015628971 = weight(_text_:information in 255) [ClassicSimilarity], result of:
              0.015628971 = score(doc=255,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1920054 = fieldWeight in 255, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=255)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Public library catalogues in early twentieth-century Britain frequently included annotations, either to clarify obscure titles or to provide further information about the subject-matter of the books they described. Two manuals giving instruction on how to do this were published at that time. Following World War I, with the decline of the printed catalogue, this kind of annotation became rarer, and was almost confined to bulletins of new books. The early issues of the British National Bibliography included some annotations in exceptional cases. Parallels are drawn with the provision of table-of-contents information in present-day OPACs.
    Type
    a
  11. Ou, S.; Khoo, C.; Goh, D.H.; Heng, H.-Y.: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization (2004) 0.01
    0.005802007 = product of:
      0.014505018 = sum of:
        0.009036016 = weight(_text_:a in 2676) [ClassicSimilarity], result of:
          0.009036016 = score(doc=2676,freq=22.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.16900843 = fieldWeight in 2676, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2676)
        0.0054690014 = product of:
          0.010938003 = sum of:
            0.010938003 = weight(_text_:information in 2676) [ClassicSimilarity], result of:
              0.010938003 = score(doc=2676,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1343758 = fieldWeight in 2676, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2676)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     We investigated an approach to automatic discourse parsing of sociology dissertation abstracts as a sentence categorization task. Decision tree induction was used for the automatic categorization. Three models were developed. Model 1 made use of word tokens found in the sentences. Model 2 made use of both word tokens and sentence position in the abstract. In addition to the attributes used in Model 2, Model 3 also considered information regarding the presence of indicator words in surrounding sentences. Model 3 obtained the highest accuracy rate of 74.5% when applied to a test sample, compared to 71.6% for Model 2 and 60.8% for Model 1. The results indicated that information about sentence position can substantially increase the accuracy of categorization, and indicator words in earlier sentences (before the sentence being processed) also contribute to the categorization accuracy.
    Content
    1. Introduction This paper reports our initial effort to develop an automatic method for parsing the discourse structure of sociology dissertation abstracts. This study is part of a broader study to develop a method for multi-document summarization. Accurate discourse parsing will make it easier to perform automatic multi-document summarization of dissertation abstracts. In a previous study, we determined that the macro-level structure of dissertation abstracts typically has five sections (Khoo et al., 2002). In this study, we treated discourse parsing as a text categorization problem - assigning each sentence in a dissertation abstract to one of the five predefined sections or categories. Decision tree induction, a machine-learning method, was applied to word tokens found in the abstracts to construct a decision tree model for the categorization purpose. Decision tree induction was selected primarily because decision tree models are easy to interpret and can be converted to rules that can be incorporated in other computer programs. A well-known decision-tree induction program, C5.0 (Quinlan, 1993), was used in this study.
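The categorization step described above can be sketched as follows. This is a hand-written stand-in for the C5.0-induced tree, using Model 2's two attribute types (word tokens and relative sentence position); the indicator words and the example abstract are invented for illustration:

```python
# Sketch: assign each sentence of an abstract to one of the five macro-level
# sections (background, purpose, method, results, conclusion). The rule
# cascade below mimics what an induced decision tree might learn; the
# indicator-word sets are hypothetical, not the paper's.
INDICATORS = {
    "purpose":    {"aim", "purpose", "investigate", "examine"},
    "method":     {"used", "sample", "survey", "interviews", "data"},
    "results":    {"found", "showed", "results", "significant"},
    "conclusion": {"conclude", "suggests", "implications"},
}

def features(sentence, index, total):
    tokens = {w.strip(".,;:").lower() for w in sentence.split()}
    return {"tokens": tokens, "position": index / max(total - 1, 1)}

def categorize(sentence, index, total):
    f = features(sentence, index, total)
    for section in ("purpose", "method", "results", "conclusion"):
        if f["tokens"] & INDICATORS[section]:
            return section
    # Fall back on relative position alone, as Model 2's position attribute allows.
    if f["position"] < 0.2:
        return "background"
    return "conclusion" if f["position"] > 0.8 else "method"

abstract = [
    "Little is known about reading habits of teenagers.",
    "The aim of this study is to examine those habits.",
    "We used survey data from 200 participants.",
    "Results showed a significant decline over time.",
    "We conclude that schools should intervene.",
]
labels = [categorize(s, i, len(abstract)) for i, s in enumerate(abstract)]
```

A learned tree would of course weigh many more tokens and interactions; the point is only that each sentence is mapped independently to one of five predefined categories from token and position attributes.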
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
    Type
    a
  12. Abstracting and indexing services in perspective : Miles Conrad memorial lectures 1969-1983. Commemorating the twenty-fifth anniversary of the National Federation of Abstracting and Information Services (1983) 0.01
    0.005751905 = product of:
      0.014379762 = sum of:
        0.005448922 = weight(_text_:a in 689) [ClassicSimilarity], result of:
          0.005448922 = score(doc=689,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=689)
        0.0089308405 = product of:
          0.017861681 = sum of:
            0.017861681 = weight(_text_:information in 689) [ClassicSimilarity], result of:
              0.017861681 = score(doc=689,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21943474 = fieldWeight in 689, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=689)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Enthält u.a. die Beiträge: BAKER, D.B.: Abstracting and indexing services: past, present, and future; KENNEDY, H.E.: A perspective on fifteen years in the abstracting and indexing field; WEIL, B.H.: Will abstracts survive technological developments? and will "cheaper is better" win out?; KILGOUR, F.G.: Comparative development of abstracting and indexing, and monograph cataloging; ROWLETT, R.J.: Abstracts, who needs them?
    Imprint
    Arlington : Information Resources Pr.
  13. Jizba, L.: Reflections on summarizing and abstracting : implications for Internet Web documents, and standardized library cataloging databases (1997) 0.01
    0.005735424 = product of:
      0.0143385595 = sum of:
        0.004767807 = weight(_text_:a in 701) [ClassicSimilarity], result of:
          0.004767807 = score(doc=701,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=701)
        0.009570752 = product of:
          0.019141505 = sum of:
            0.019141505 = weight(_text_:information in 701) [ClassicSimilarity], result of:
              0.019141505 = score(doc=701,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23515764 = fieldWeight in 701, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=701)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Comments on the value of abstracts or summary notes for information available online via the Internet and WWW, and concludes that automated abstracting techniques would be highly useful if routinely applied to cataloguing or metadata for Internet documents and documents in other databases. Information seekers need external summary information to assess the content and value of retrieved documents. Examines traditional models for writers in library audiovisual cataloguing, periodical databases and archival work, along with innovative new model databases featuring robust cataloguing summaries. Notes recent developments in automated techniques, computational research, and machine summarization of digital images. Recommendations are made for future designers of cataloguing and metadata standards
    Type
    a
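The score breakdowns shown with each hit follow Lucene's ClassicSimilarity (TF-IDF) formula: each term's score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm. As a minimal sketch (standard Lucene scoring, not specific to this database), the numbers from the first explain tree above recombine like this:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm):
    """Recompute one Lucene ClassicSimilarity term score as displayed
    in the explain trees: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Values taken from the weight(_text_:a in 701) explanation above:
score = classic_similarity(freq=2.0, idf=1.153047,
                           query_norm=0.046368346, field_norm=0.0546875)
print(round(score, 9))
```

The per-term scores are then summed and multiplied by the coord(2/5) factor (2 of 5 query clauses matched) to give the hit's total score.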
  14. Parekh, R.L.: Advanced indexing and abstracting practices (2000) 0.01
    0.005593183 = product of:
      0.013982957 = sum of:
        0.005779455 = weight(_text_:a in 119) [ClassicSimilarity], result of:
          0.005779455 = score(doc=119,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 119, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=119)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 119) [ClassicSimilarity], result of:
              0.016407004 = score(doc=119,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 119, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=119)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
Indexing and abstracting are not activities that should be looked upon as ends in themselves. It is the results of these activities that should be evaluated, and this can only be done within the context of a particular database, whether in printed or machine-readable form. In this context, the indexing can be judged successful if it allows searchers to locate items they want without having to look at many they do not want. This book is intended primarily as a text for teaching indexing and abstracting in library and information science. It is of immense value to all individuals and institutions involved in information retrieval and related activities, including librarians, managers of information centres and database producers.
  15. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017) 0.01
    0.005549766 = product of:
      0.013874415 = sum of:
        0.009138121 = weight(_text_:a in 3788) [ClassicSimilarity], result of:
          0.009138121 = score(doc=3788,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 3788, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3788)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 3788) [ClassicSimilarity], result of:
              0.009472587 = score(doc=3788,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 3788, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3788)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2101-2115
    Type
    a
  16. Wang, F.L.; Yang, C.C.: ¬The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.01
    0.0055169817 = product of:
      0.013792454 = sum of:
        0.005898632 = weight(_text_:a in 5049) [ClassicSimilarity], result of:
          0.005898632 = score(doc=5049,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.11032722 = fieldWeight in 5049, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 5049) [ClassicSimilarity], result of:
              0.015787644 = score(doc=5049,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 5049, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5049)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports the users in evaluating the relevance of the extracted documents returned by information retrieval systems. Because of this tool, efficient filtering can be achieved. Indirectly, these systems help to resolve the problem of information overloading. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of the language differences on the automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language difference on automatic text summarization. It includes the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using the parallel corpus in English and Chinese as the tested object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
    Footnote
    Beitrag einer special topic section on multilingual information systems
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.684-696
    Type
    a
  17. Sauperl, A.; Klasinc, J.; Luzar, S.: Components of abstracts : logical structure of scholarly abstracts in pharmacology, sociology, and linguistics and literature (2008) 0.01
    0.0054589617 = product of:
      0.0136474045 = sum of:
        0.0068111527 = weight(_text_:a in 1961) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=1961,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 1961, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1961)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 1961) [ClassicSimilarity], result of:
              0.013672504 = score(doc=1961,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 1961, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1961)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The international standard ISO 214:1976 defines an abstract as "an abbreviated, accurate representation of the contents of a document" (p. 1) that should "enable readers to identify the basic content of a document quickly and accurately to determine relevance" (p. 1). It also should be useful in computerized searching. The ISO standard suggests including the following elements: purpose, methods, results, and conclusions. Researchers have often challenged this structure and found that different disciplines and cultures prefer different information content. These claims are partially supported by the findings of our research into the structure of pharmacology, sociology, and Slovenian language and literature abstracts of papers published in international and Slovenian scientific periodicals. The three disciplines have different information content. Slovenian pharmacology abstracts differ in content from those in international periodicals while the differences between international and Slovenian abstracts are small in sociology. In the field of Slovenian language and literature, only domestic abstracts were studied. The identified differences can in part be attributed to the disciplines, but also to the different role of journals and papers in the professional society and to differences in perception of the role of abstracts. The findings raise questions about the structure of abstracts required by some publishers of international journals.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1420-1432
    Type
    a
  18. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997) 0.00
    0.0042062993 = product of:
      0.0105157485 = sum of:
        0.005779455 = weight(_text_:a in 824) [ClassicSimilarity], result of:
          0.005779455 = score(doc=824,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=824)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 824) [ClassicSimilarity], result of:
              0.009472587 = score(doc=824,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=824)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role in networked information retrieval of abstracts and their functional analogues such as metadata.
    Type
    a
  19. Hartley, J.; Betts, L.: ¬The effects of spacing and titles on judgments of the effectiveness of structured abstracts (2007) 0.00
    0.004096731 = product of:
      0.010241828 = sum of:
        0.0034055763 = weight(_text_:a in 1325) [ClassicSimilarity], result of:
          0.0034055763 = score(doc=1325,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.06369744 = fieldWeight in 1325, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1325)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 1325) [ClassicSimilarity], result of:
              0.013672504 = score(doc=1325,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 1325, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1325)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Previous research assessing the effectiveness of structured abstracts has been limited in two respects. First, when comparing structured abstracts with traditional ones, investigators usually have rewritten the original abstracts, and thus confounded changes in the layout with changes in both the wording and the content of the text. Second, investigators have not always included the title of the article together with the abstract when asking participants to judge the quality of the abstracts, yet titles alert readers to the meaning of the materials that follow. The aim of this research was to redress these limitations. Three studies were carried out. Four versions of each of four abstracts were prepared. These versions consisted of structured/traditional abstracts matched in content, with and without titles. In Study 1, 64 undergraduates each rated one of these abstracts on six separate rating scales. In Study 2, 225 academics and research workers rated the abstracts electronically, and in Study 3, 252 information scientists did likewise. In Studies 1 and 3, the respondents rated the structured abstracts significantly more favorably than they did the traditional ones, but the presence or absence of titles had no effect on their judgments. In Study 2, no main effects were observed for structure or for titles. The layout of the text, together with the subheadings, contributed to the higher ratings of effectiveness for structured abstracts, but the presence or absence of titles had no clear effects in these experimental studies. It is likely that this spatial organization, together with the greater amount of information normally provided in structured abstracts, explains why structured abstracts are generally judged to be superior to traditional ones.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, S.2335-2340
    Type
    a
  20. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007) 0.00
    0.0035052493 = product of:
      0.008763123 = sum of:
        0.0048162127 = weight(_text_:a in 807) [ClassicSimilarity], result of:
          0.0048162127 = score(doc=807,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.090081796 = fieldWeight in 807, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=807)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 807) [ClassicSimilarity], result of:
              0.007893822 = score(doc=807,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=807)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
Purpose - The literature on abstracts recommends the revision of author-supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is to first survey 187 bibliographic databases to ascertain how many did revise abstracts, and then study the practical amendments made by one of these, i.e. LISA (Library and Information Science Abstracts). Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 databases out of 187 exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA amended abstracts were classified into sentence-level and beyond sentence-level categories, and then as additions, deletions and rephrasing of text. Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it imply more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects which were discussed. Originality/value - Amendments made by LISA editors are important in multi- and inter-disciplinary research, since they tend to clarify certain aspects such as terminology, and suggest that abstracts should not always be considered as substitutes for the original document. From this point of view, the revision of abstracts can be considered as an important factor in enhancing a database's quality.
    Type
    a

Types

  • a 45
  • m 10
  • r 2
  • s 2
  • b 1