Search (14 results, page 1 of 1)

  • year_i:[2000 TO 2010}
  • theme_ss:"Referieren"
  1. Koltay, T.: Abstracting: information literacy on a professional level (2009) 0.01
    0.0055089183 = product of:
      0.022035673 = sum of:
        0.022035673 = weight(_text_:information in 3610) [ClassicSimilarity], result of:
          0.022035673 = score(doc=3610,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3592092 = fieldWeight in 3610, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3610)
      0.25 = coord(1/4)
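The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sketch, its numbers can be reproduced from Lucene's documented formulas; the helper name below is ours, and the inputs are read directly off the explain tree (freq, docFreq, maxDocs, queryNorm, fieldNorm):

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # Lucene ClassicSimilarity building blocks, as shown in the explain tree:
    tf = math.sqrt(freq)                            # tf(freq=14.0) = 3.7416575
    idf = 1.0 + math.log(max_docs / (doc_freq + 1)) # idf(docFreq=20772, maxDocs=44218) = 1.7554779
    query_weight = idf * query_norm                 # queryWeight = 0.06134496
    field_weight = tf * idf * field_norm            # fieldWeight = 0.3592092
    return query_weight * field_weight              # weight(_text_:information) = 0.022035673

score = classic_similarity_score(
    freq=14.0, doc_freq=20772, max_docs=44218,
    query_norm=0.034944877, field_norm=0.0546875,
)
final = score * 0.25  # coord(1/4): one of four query clauses matched = 0.0055089183
```

The same computation, with different freq and fieldNorm values, accounts for every scoring tree on this page; only the coord factor and the per-document inputs change.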
    
    Abstract
    Purpose - This paper aims to argue for a conception of information literacy (IL) that goes beyond the abilities of finding information as it includes communication skills. An important issue in this is that abstractors exercise IL on a professional level. Design/methodology/approach - By stressing the importance of the fact that information literacy extends towards verbal communication the paper takes an interdisciplinary approach, the main component of which is linguistics. Findings - It is found that verbal communication and especially analytic-synthetic writing activities play an important role in information literacy at the level of everyday language use, semi-professional and professional summarising of information. The latter level characterises abstracting. Originality/value - The paper adds to the body of knowledge about information literacy in general and in connection with communication and abstracting.
  2. Lancaster, F.W.: Indexing and abstracting in theory and practice (2003) 0.00
    0.0047219303 = product of:
      0.018887721 = sum of:
        0.018887721 = weight(_text_:information in 4913) [ClassicSimilarity], result of:
          0.018887721 = score(doc=4913,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3078936 = fieldWeight in 4913, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4913)
      0.25 = coord(1/4)
    
    Content
    Covers: indexing principles and practice; precoordinate indexes; consistency and quality of indexing; types and functions of abstracts; writing an abstract; evaluation theory and practice; approaches used in indexing and abstracting services; indexing enhancement; natural language in information retrieval; indexing and abstracting of imaginative works; databases of images and sound; automatic indexing and abstracting; the future of indexing and abstracting services
    Footnote
    Rez. in: JASIST 57(2006) no.1, S.144-145 (H. Saggion): "... This volume is a very valuable source of information for not only students and professionals in library and information science but also for individuals and institutions involved in knowledge management and organization activities. Because of its broad coverage of the information science topic, teachers will find the contents of this book useful for courses in the areas of information technology, digital as well as traditional libraries, and information science in general."
    Imprint
    Champaign, IL : Graduate School of Library and Information Science
  3. Hartley, J.: Do structured abstracts take more space? : And does it matter? (2002) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 582) [ClassicSimilarity], result of:
          0.016657405 = score(doc=582,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 582, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=582)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 28(2002) no.5, S.417-422
  4. Hartley, J.; Betts, L.: Common weaknesses in traditional abstracts in the social sciences (2009) 0.00
    0.0039907596 = product of:
      0.015963038 = sum of:
        0.015963038 = weight(_text_:information in 3115) [ClassicSimilarity], result of:
          0.015963038 = score(doc=3115,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2602176 = fieldWeight in 3115, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3115)
      0.25 = coord(1/4)
    
    Abstract
    Detailed checklists and questionnaires have been used in the past to assess the quality of structured abstracts in the medical sciences. The aim of this article is to report the findings when a simpler checklist was used to evaluate the quality of 100 traditional abstracts published in 53 different social science journals. Most of these abstracts contained information about the aims, methods, and results of the studies. However, many did not report details about the sample sizes, ages, or sexes of the participants, or where the research was carried out. The correlation between the lengths of the abstracts and the amount of information present was 0.37 (p < .001), suggesting that word limits for abstracts may restrict the presence of key information to some extent. We conclude that authors can improve the quality of information in traditional abstracts in the social sciences by using the simple checklist provided in this article.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2010-2018
  5. Montesi, M.; Urdiciain, B.G.: Recent linguistic research into author abstracts : its value for information science (2005) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 4823) [ClassicSimilarity], result of:
          0.012364916 = score(doc=4823,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 4823, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4823)
      0.25 = coord(1/4)
    
    Abstract
    This paper is a review of genre analysis of author abstracts carried out in the area of English for Special Purposes (ESP) since 1990. Given the descriptive character of such analysis, it can be valuable for Information Science (IS), as it provides a picture of the variation in author abstracts, depending on the discipline, culture and language of the author, and the envisaged context. The authors claim that such knowledge can be useful for information professionals who need to revise author abstracts, or use them for other activities in the organization of knowledge, such as subject analysis and control of vocabulary. With this purpose in mind, we summarize various findings of ESP research. We describe how abstracts vary in structure, content and discourse, and how linguists explain such variations. Other factors taken into account are the stylistic and discoursal features of the abstract, lexical choices, and the possible sources of bias. In conclusion, we show how such findings can have practical and theoretical implications for IS.
  6. Parekh, R.L.: Advanced indexing and abstracting practices (2000) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 119) [ClassicSimilarity], result of:
          0.012364916 = score(doc=119,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 119, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=119)
      0.25 = coord(1/4)
    
    Abstract
    Indexing and abstracting are not activities that should be looked upon as ends in themselves. It is the results of these activities that should be evaluated, and this can only be done within the context of a particular database, whether in printed or machine-readable form. In this context, the indexing can be judged successful if it allows searchers to locate items they want without having to look at many they do not want. This book is intended primarily as a text for teaching indexing and abstracting in library and information science. It is of immense value to all individuals and institutions involved in information retrieval and related activities, including librarians, managers of information centres and database producers.
  7. Wang, F.L.; Yang, C.C.: ¬The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    0.0029745363 = product of:
      0.011898145 = sum of:
        0.011898145 = weight(_text_:information in 5049) [ClassicSimilarity], result of:
          0.011898145 = score(doc=5049,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.19395474 = fieldWeight in 5049, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.25 = coord(1/4)
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports users in evaluating the relevance of the extracted documents returned by information retrieval systems, and thereby enables efficient filtering. Indirectly, these systems help to resolve the problem of information overloading. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of the language differences on the automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language difference on automatic text summarization. It includes the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using the parallel corpus in English and Chinese as the tested object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
    Footnote
    Beitrag einer special topic section on multilingual information systems
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.684-696
  8. Bowman, J.H.: Annotation: a lost art in cataloguing (2007) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 255) [ClassicSimilarity], result of:
          0.011778563 = score(doc=255,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 255, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=255)
      0.25 = coord(1/4)
    
    Abstract
    Public library catalogues in early twentieth-century Britain frequently included annotations, either to clarify obscure titles or to provide further information about the subject-matter of the books they described. Two manuals giving instruction on how to do this were published at that time. Following World War I, with the decline of the printed catalogue, this kind of annotation became rarer, and was almost confined to bulletins of new books. The early issues of the British National Bibliography included some annotations in exceptional cases. Parallels are drawn with the provision of table-of-contents information in present-day OPACs.
  9. Hartley, J.; Betts, L.: ¬The effects of spacing and titles on judgments of the effectiveness of structured abstracts (2007) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 1325) [ClassicSimilarity], result of:
          0.010304097 = score(doc=1325,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 1325, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1325)
      0.25 = coord(1/4)
    
    Abstract
    Previous research assessing the effectiveness of structured abstracts has been limited in two respects. First, when comparing structured abstracts with traditional ones, investigators usually have rewritten the original abstracts, and thus confounded changes in the layout with changes in both the wording and the content of the text. Second, investigators have not always included the title of the article together with the abstract when asking participants to judge the quality of the abstracts, yet titles alert readers to the meaning of the materials that follow. The aim of this research was to redress these limitations. Three studies were carried out. Four versions of each of four abstracts were prepared. These versions consisted of structured/traditional abstracts matched in content, with and without titles. In Study 1, 64 undergraduates each rated one of these abstracts on six separate rating scales. In Study 2, 225 academics and research workers rated the abstracts electronically, and in Study 3, 252 information scientists did likewise. In Studies 1 and 3, the respondents rated the structured abstracts significantly more favorably than they did the traditional ones, but the presence or absence of titles had no effect on their judgments. In Study 2, no main effects were observed for structure or for titles. The layout of the text, together with the subheadings, contributed to the higher ratings of effectiveness for structured abstracts, but the presence or absence of titles had no clear effects in these experimental studies. It is likely that this spatial organization, together with the greater amount of information normally provided in structured abstracts, explains why structured abstracts are generally judged to be superior to traditional ones.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, S.2335-2340
  10. Sauperl, A.; Klasinc, J.; Luzar, S.: Components of abstracts : logical structure of scholarly abstracts in pharmacology, sociology, and linguistics and literature (2008) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 1961) [ClassicSimilarity], result of:
          0.010304097 = score(doc=1961,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 1961, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1961)
      0.25 = coord(1/4)
    
    Abstract
    The international standard ISO 214:1976 defines an abstract as "an abbreviated, accurate representation of the contents of a document" (p. 1) that should "enable readers to identify the basic content of a document quickly and accurately to determine relevance" (p. 1). It also should be useful in computerized searching. The ISO standard suggests including the following elements: purpose, methods, results, and conclusions. Researchers have often challenged this structure and found that different disciplines and cultures prefer different information content. These claims are partially supported by the findings of our research into the structure of pharmacology, sociology, and Slovenian language and literature abstracts of papers published in international and Slovenian scientific periodicals. The three disciplines have different information content. Slovenian pharmacology abstracts differ in content from those in international periodicals while the differences between international and Slovenian abstracts are small in sociology. In the field of Slovenian language and literature, only domestic abstracts were studied. The identified differences can in part be attributed to the disciplines, but also to the different role of journals and papers in the professional society and to differences in perception of the role of abstracts. The findings raise questions about the structure of abstracts required by some publishers of international journals.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1420-1432
  11. Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 2362) [ClassicSimilarity], result of:
          0.008413259 = score(doc=2362,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 2362, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2362)
      0.25 = coord(1/4)
    
    Abstract
    Many writers of structured abstracts spend a good deal of time revising and polishing their texts - but is it worth it? Do readers notice the difference? In this paper we report three studies of readers using rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in its constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. Pooling the data from Studies 1 and 2, however, in Study 3, led to significant differences at a higher probability level between the perception of the original and revised abstract as a whole and between the same components as found in Study 1. These results thus indicate that the revised abstract as a whole, as well as certain specific components of it, were judged significantly clearer than the original one. In short, the results of these experiments show that readers can and do perceive differences between original and revised texts - sometimes - and that therefore these efforts are worth the time and effort.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, S.1870-1877
  12. Ou, S.; Khoo, C.; Goh, D.H.; Heng, H.-Y.: Automatic discourse parsing of sociology dissertation abstracts as sentence categorization (2004) 0.00
    0.0020608194 = product of:
      0.008243278 = sum of:
        0.008243278 = weight(_text_:information in 2676) [ClassicSimilarity], result of:
          0.008243278 = score(doc=2676,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1343758 = fieldWeight in 2676, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2676)
      0.25 = coord(1/4)
    
    Abstract
    We investigated an approach to automatic discourse parsing of sociology dissertation abstracts as a sentence categorization task. Decision tree induction was used for the automatic categorization. Three models were developed. Model 1 made use of word tokens found in the sentences. Model 2 made use of both word tokens and sentence position in the abstract. In addition to the attributes used in Model 2, Model 3 also considered information regarding the presence of indicator words in surrounding sentences. Model 3 obtained the highest accuracy rate of 74.5% when applied to a test sample, compared to 71.6% for Model 2 and 60.8% for Model 1. The results indicated that information about sentence position can substantially increase the accuracy of categorization, and indicator words in earlier sentences (before the sentence being processed) also contribute to the categorization accuracy.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  13. Kuhlen, R.: Informationsaufbereitung III : Referieren (Abstracts - Abstracting - Grundlagen) (2004) 0.00
    0.0020608194 = product of:
      0.008243278 = sum of:
        0.008243278 = weight(_text_:information in 2917) [ClassicSimilarity], result of:
          0.008243278 = score(doc=2917,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1343758 = fieldWeight in 2917, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2917)
      0.25 = coord(1/4)
    
    Abstract
    What an abstract is (used in the following synonymously with Referat or Kurzreferat) is laid down by the American National Standards Institute in a way that most experts can surely accept: "An abstract is defined as an abbreviated, accurate representation of the contents of a document"; the German standard DIN 1426 says almost the same: "The Kurzreferat renders the content of the document briefly and clearly." Abstracts are part of everyday scholarly life. Nearly all publications, at least in the natural sciences, engineering, information-related fields, and medicine, are preceded by abstracts, "preferably prepared by its author(s) for publication with it". There is probably no scientist who has not at some point written an abstract. Does preparing abstracts then belong to documentation or information-science methodology at all, if anyone can do it? What constitutes the informational added value produced by expert abstracts as opposed to lay abstracts? This is not easy to answer, not least because suitable evaluation procedures for comparatively and "objectively" measuring the quality of abstracts are lacking. Abstracts are produced in considerable volume by information specialists, often on the assumption that authors themselves are less suited to the task. Let us recall what we know about abstracts and abstracting. A particularly good abstract is at times clearer than the source text itself, but must not contain more information than it does: "Good abstracts are highly structured, concise, and coherent, and are the result of a thorough analysis of the content of the abstracted materials. Abstracts may be more readable than the basis documents, but because of size constraints they rarely equal and never surpass the information content of the basic document."
    This is understandable, for an "abstract" is first of all nothing other than the result of a process of abstraction. Without losing ourselves too deeply in the philosophical background of abstraction, it consists "in disregarding certain ideational or conceptual contents, from which one abstracts in favour of other partial contents. It is always bound up with the fixing of features (those of interest) by active attention, features regarded, under a particular pragmatic point of view, as 'essential' for an imagined object or for an object (or a plurality of objects) falling under a concept." Abstracts reduce not so much conceptual contents as texts with respect to their propositional content. Borko/Bernier have even quantified this; they estimate the reduction factor at 1:10 to 1:12.
    Source
    Grundlagen der praktischen Information und Dokumentation. 5th, completely revised edition. 2 vols. Ed. by R. Kuhlen, Th. Seeger and D. Strauch. Founded by Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried. Vol.1: Handbuch zur Einführung in die Informationswissenschaft und -praxis
  14. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 807) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=807,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 807, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=807)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The literature on abstracts recommends the revision of author supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is to first survey 187 bibliographic databases to ascertain how many did revise abstracts, and then study the practical amendments made by one of these, i.e. LISA (Library and Information Science Abstracts). Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 databases out of 187 exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA amended abstracts were classified into sentence-level and beyond sentence-level categories, and then as additions, deletions and rephrasing of text. Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it imply more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects which were discussed. Originality/value - Amendments made by LISA editors are important in multi- and inter-disciplinary research, since they tend to clarify certain aspects such as terminology, and suggest that abstracts should not always be considered as substitutes for the original document. From this point-of-view, the revision of abstracts can be considered as an important factor in enhancing a database's quality.