Search (48 results, page 2 of 3)

  • Active filter: theme_ss:"Referieren"
  • Active filter: type_ss:"a"
  1. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.00
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
    Source
    Information, knowledge, evolution. Proceedings of the 44th FID Congress, Helsinki, 28.8.-1.9.1988. Ed. by S. Koskiala and R. Launo
  2. Montesi, M.; Urdiciain, B.G.: Recent linguistic research into author abstracts : its value for information science (2005) 0.00
    
    Abstract
    This paper is a review of genre analysis of author abstracts carried out in the area of English for Special Purposes (ESP) since 1990. Given the descriptive character of such analysis, it can be valuable for Information Science (IS), as it provides a picture of the variation in author abstracts, depending on the discipline, culture and language of the author, and the envisaged context. The authors claim that such knowledge can be useful for information professionals who need to revise author abstracts, or use them for other activities in the organization of knowledge, such as subject analysis and control of vocabulary. With this purpose in mind, we summarize various findings of ESP research. We describe how abstracts vary in structure, content and discourse, and how linguists explain such variations. Other factors taken into account are the stylistic and discoursal features of the abstract, lexical choices, and the possible sources of bias. In conclusion, we show how such findings can have practical and theoretical implications for IS.
  3. Hartley, J.; Betts, L.: Common weaknesses in traditional abstracts in the social sciences (2009) 0.00
    
    Abstract
    Detailed checklists and questionnaires have been used in the past to assess the quality of structured abstracts in the medical sciences. The aim of this article is to report the findings when a simpler checklist was used to evaluate the quality of 100 traditional abstracts published in 53 different social science journals. Most of these abstracts contained information about the aims, methods, and results of the studies. However, many did not report details about the sample sizes, ages, or sexes of the participants, or where the research was carried out. The correlation between the lengths of the abstracts and the amount of information present was 0.37 (p < .001), suggesting that word limits for abstracts may restrict the presence of key information to some extent. We conclude that authors can improve the quality of information in traditional abstracts in the social sciences by using the simple checklist provided in this article.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2010-2018
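    A minimal sketch (in Python, with invented data) of the kind of analysis reported above: each abstract is scored against a simple presence/absence checklist, and the checklist scores are correlated with abstract lengths. The checklist items, word counts and scores below are illustrative assumptions, not the article's own checklist or data.

      import math

      # Hypothetical checklist items, loosely based on the features the study says
      # were often missing (sample sizes, ages, sexes, study location).
      CHECKLIST = ["aims", "methods", "results", "sample size", "ages", "sexes", "location"]

      def checklist_score(reported):
          """Count how many checklist items an abstract reports."""
          return sum(item in reported for item in CHECKLIST)

      def pearson_r(xs, ys):
          """Pearson correlation between two equal-length numeric sequences."""
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
          sy = math.sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy)

      # Invented example: abstract lengths (in words) and their checklist scores.
      lengths = [95, 120, 150, 180, 210]
      scores = [checklist_score(s) for s in (
          {"aims", "methods", "results"},
          {"aims", "methods", "results", "sample size"},
          {"aims", "methods", "results", "location"},
          {"aims", "methods", "results", "sample size", "ages"},
          {"aims", "methods", "results", "sample size", "ages", "sexes"},
      )]
      print(round(pearson_r(lengths, scores), 2))  # positive, as in the reported r = 0.37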
  4. Sen, B.K.: Research articles in LISA Plus : problems of identification (1997) 0.00
    
    Abstract
    Reports results of a study to determine how easily and quickly research articles in library and information science could be retrieved from the LISA Plus CD-ROM database. Results show that the search with the descriptor 'research' retrieves all types of articles and it is necessary to read through every abstract to locate the research articles. The introductory sentence of a substantial number of abstracts hinders the process of identification since the sentence provides such information as the conference where the paper was presented, the special issue or the section of a periodical where the article is located, or obvious background information. Suggests measures whereby research articles can be identified easily and rapidly.
    Source
    Malaysian journal of library and information science. 2(1997) no.1, S.97-106
  5. Bowman, J.H.: Annotation: a lost art in cataloguing (2007) 0.00
    
    Abstract
    Public library catalogues in early twentieth-century Britain frequently included annotations, either to clarify obscure titles or to provide further information about the subject-matter of the books they described. Two manuals giving instruction on how to do this were published at that time. Following World War I, with the decline of the printed catalogue, this kind of annotation became rarer, and was almost confined to bulletins of new books. The early issues of the British National Bibliography included some annotations in exceptional cases. Parallels are drawn with the provision of table-of-contents information in present-day OPACs.
    Footnote
    Simultaneously published as Cataloger, Editor, and Scholar: Essays in Honor of Ruth C. Carter
  6. Koltay, T.: Abstracting: information literacy on a professional level (2009) 0.00
    
    Abstract
    Purpose - This paper aims to argue for a conception of information literacy (IL) that goes beyond the ability to find information, as it also includes communication skills. An important issue in this is that abstractors exercise IL on a professional level. Design/methodology/approach - By stressing the importance of the fact that information literacy extends towards verbal communication, the paper takes an interdisciplinary approach, the main component of which is linguistics. Findings - It is found that verbal communication, and especially analytic-synthetic writing activities, play an important role in information literacy at the levels of everyday, semi-professional and professional summarising of information. The latter level characterises abstracting. Originality/value - The paper adds to the body of knowledge about information literacy in general and in connection with communication and abstracting.
    Source
    Journal of documentation. 65(2009) no.5, S.841-855
  7. Bakewell, K.G.B.; Rowland, G.: Indexing and abstracting (1993) 0.00
    
    Abstract
    State of the art review of UK developments in indexing and abstracting during the period 1986-1990, covering: bibliographies of indexing and abstracting; British standards (including the revised British Standard on indexing, BS 3700); the Wheatley Medal and Carey Award; a list of indexes published during this period; the role of the computer and automatic indexing; hypermedia; PRECIS; POPSI; relational indexing; thesauri; education and training; the indexing process; newspaper indexing; fiction indexes; the indexing profession; and a review of abstracting and indexing services.
  8. Cross, C.; Oppenheim, C.: A genre analysis of scientific abstracts (2006) 0.00
    
    Abstract
    Purpose - The purpose of the paper is to analyse the structure of a small number of abstracts that have appeared in the CABI database over a number of years, during which time the authorship of the abstracts changed from CABI editorial staff to journal article authors themselves. This paper reports a study of the semantic organisation and thematic structure of 12 abstracts from the field of protozoology in an effort to discover whether these abstracts followed generally agreed abstracting guidelines. Design/methodology/approach - The method adopted was a move analysis of the text of the abstracts. This move analysis revealed a five-move pattern: move 1 situates the research within the scientific community; move 2 introduces the research by either describing the main features of the research or presenting its purpose; move 3 describes the methodology; move 4 states the results; and move 5 draws conclusions or suggests practical applications. Findings - Thematic analysis shows that scientific abstract authors thematise their subject by referring to the discourse domain or the "real" world. Not all of the abstracts succeeded in following the guideline advice. However, there was general consistency regarding semantic organisation and thematic structure. Research limitations/implications - The research limitations were the small number of abstracts examined, from just one subject domain. Practical implications - The practical implications are the need for abstracting services to be clearer and more prescriptive regarding how they want abstracts to be structured, as the lack of formal training in abstract writing increases the risk of subjectivity and verbosity and reduces clarity in scientific abstracts. Another implication of the research is that abstracting and indexing services must ensure that they maintain abstract quality if they introduce policies of accepting author abstracts. This is important as there is probably little formal training in abstract writing for science students at present. Recommendations for further research are made. Originality/value - This paper reports a study of the semantic organisation and thematic structure of 12 abstracts from the field of protozoology.
    Source
    Journal of documentation. 62(2006) no.4, S.428-446
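    A hedged sketch of the five-move pattern reported above, expressed as a small Python data model: each sentence of a coded abstract is assigned one move, and a checker lists the moves that are missing. Assigning moves is a manual linguistic judgement in the study; the code only illustrates the structure of the pattern, not an automatic classifier, and the example sentences are invented.

      from enum import Enum

      class Move(Enum):
          SITUATE_RESEARCH = 1     # situate the work within the scientific community
          INTRODUCE_RESEARCH = 2   # main features or purpose of the research
          DESCRIBE_METHODOLOGY = 3
          STATE_RESULTS = 4
          CONCLUDE_OR_APPLY = 5    # conclusions or practical applications

      def missing_moves(coded_sentences):
          """Return the set of moves not present in a hand-coded abstract."""
          present = {move for _, move in coded_sentences}
          return set(Move) - present

      coded = [
          ("Abstract authorship in this database has shifted over time.", Move.SITUATE_RESEARCH),
          ("This study examines twelve abstracts from one subject field.", Move.INTRODUCE_RESEARCH),
          ("A move analysis of the abstract texts was carried out.", Move.DESCRIBE_METHODOLOGY),
          ("A recurring five-move pattern was found.", Move.STATE_RESULTS),
      ]
      print(missing_moves(coded))  # the example abstract lacks a concluding move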
  9. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    
    Abstract
    4 working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge-processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - 4 working steps in which an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract.
  10. Wilson, M.J.; Wilson, M.L.: A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.00
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.291-306
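    A hedged sketch of how the two kinds of measure contrasted above might be computed once a participant's summary has been hand-coded: each statement is labelled as a fact or not and given a Bloom's-taxonomy level from 1 (remember) to 6 (create). The labels, weights and example statements are hypothetical; Wilson and Wilson's own coding scheme is described in their article.

      from dataclasses import dataclass

      @dataclass
      class CodedStatement:
          text: str
          is_fact: bool      # counted for the fact-to-statement ratio
          bloom_level: int   # 1 = remember ... 6 = create

      def fact_to_statement_ratio(statements):
          return sum(s.is_fact for s in statements) / len(statements)

      def mean_depth_of_learning(statements):
          return sum(s.bloom_level for s in statements) / len(statements)

      summary = [
          CodedStatement("The topic has three main subareas.", True, 1),
          CodedStatement("Two of the subareas overlap in their methods.", True, 2),
          CodedStatement("So a combined approach seems most promising.", False, 5),
      ]
      print(fact_to_statement_ratio(summary))   # about 0.67: two facts out of three statements
      print(mean_depth_of_learning(summary))    # about 2.67: average Bloom level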
  11. Molina, M.P.: Documentary abstracting : toward a methodological approach (1995) 0.00
    
    Abstract
    In the general abstracting process (GAP), there are 2 types of data: textual, within a particular framed trilogy (surface, deep, and rhetoric); and documentary (abstractor, means of production, and user demands). Proposes, for its development, the use of the following disciplines, among others: linguistics (structural, transformational, and textual), logic (formal and fuzzy), and psychology (cognitive). The model for that textual transformation is based on a system of combined strategies with 4 key stages: reading understanding, selection, interpretation, and synthesis.
    Source
    Journal of the American Society for Information Science. 46(1995) no.3, S.225-234
  12. Wang, F.L.; Yang, C.C.: The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports the users in evaluating the relevance of the extracted documents returned by information retrieval systems. Because of this tool, efficient filtering can be achieved. Indirectly, these systems help to resolve the problem of information overloading. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of the language differences on the automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language difference on automatic text summarization. It includes the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using the parallel corpus in English and Chinese as the tested object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.684-696
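    A minimal sketch of the sentence-extraction idea the abstract describes: sentences are scored by the document frequency of their content words and the top-scoring sentences are returned in original order. It is an illustrative toy (English only, naive tokenisation), not the multilingual system evaluated by Wang and Yang.

      import re
      from collections import Counter

      STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "by", "this"}

      def summarize(text, n_sentences=2):
          sentences = re.split(r"(?<=[.!?])\s+", text.strip())
          words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
          freq = Counter(words)

          def score(sentence):
              terms = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]
              return sum(freq[t] for t in terms) / (len(terms) or 1)

          top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
          return " ".join(s for s in sentences if s in top)  # keep original sentence order

      doc = ("Automatic summarization systems extract key sentences from source documents. "
             "Scoring mechanisms rank sentences by salient features. "
             "Grammatical and lexical differences between languages affect extraction and matching.")
      print(summarize(doc, 2))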
  13. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007) 0.00
    
    Abstract
    Purpose - The literature on abstracts recommends the revision of author-supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is to first survey 187 bibliographic databases to ascertain how many did revise abstracts, and then study the practical amendments made by one of these, i.e. LISA (Library and Information Science Abstracts). Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 databases out of 187 exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA amended abstracts were classified into sentence-level and beyond sentence-level categories, and then as additions, deletions and rephrasing of text. Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it imply more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects which were discussed. Originality/value - Amendments made by LISA editors are important in multi- and inter-disciplinary research, since they tend to clarify certain aspects such as terminology, and suggest that abstracts should not always be considered as substitutes for the original document. From this point of view, the revision of abstracts can be considered as an important factor in enhancing a database's quality.
  14. O'Rourke, A.J.: Structured abstracts in information retrieval from biomedical databases : a literature survey (1997) 0.00
    
    Abstract
    Clear guidelines have been provided for structuring the abstracts of original research and review articles and, in the past 10 years, several major medical periodicals have adopted the policy of including such abstracts with all their articles. A review of the literature reveals that proponents claim that structured abstracts enhance peer review, improve information retrieval, and ease critical appraisal. However, some periodicals have not adopted structured abstracts, and their opponents claim that they make articles longer and harder to read and restrict author originality. Concludes that previous research on structured abstracts focused on how closely they followed the prescribed structure and included salient points of the full text, rather than on their role in increasing the usefulness of the article.
  15. Jizba, L.: Reflections on summarizing and abstracting : implications for Internet Web documents, and standardized library cataloging databases (1997) 0.00
    
    Abstract
    Comments on the value of abstracts or summary notes for information available online via the Internet and WWW and concludes that automated abstracting techniques would be highly useful if routinely applied to cataloguing or metadata for Internet documents and documents in other databases. Information seekers need external summary information to assess the content and value of retrieved documents. Examines traditional models for writers in library audiovisual cataloguing, periodical databases and archival work, along with innovative new model databases featuring robust cataloguing summaries. Notes recent developments in automated techniques, computational research, and machine summarization of digital images. Recommendations are made for future designers of cataloguing and metadata standards.
    Source
    Journal of Internet cataloging. 1(1997) no.2, S.15-39
  16. Hartley, J.; Betts, L.: Revising and polishing a structured abstract : is it worth the time and effort? (2008) 0.00
    
    Abstract
    Many writers of structured abstracts spend a good deal of time revising and polishing their texts - but is it worth it? Do readers notice the difference? In this paper we report three studies of readers using rating scales to judge (electronically) the clarity of an original and a revised abstract, both as a whole and in its constituent parts. In Study 1, with approximately 250 academics and research workers, we found some significant differences in favor of the revised abstract, but in Study 2, with approximately 210 information scientists, we found no significant effects. Pooling the data from Studies 1 and 2, however, in Study 3, led to significant differences at a higher probability level between the perception of the original and revised abstract as a whole and between the same components as found in Study 1. These results thus indicate that the revised abstract as a whole, as well as certain specific components of it, were judged significantly clearer than the original one. In short, the results of these experiments show that readers can and do perceive differences between original and revised texts - sometimes - and that therefore these efforts are worth the time and effort.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, S.1870-1877
  17. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997) 0.00
    
    Abstract
    Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role of abstracts, and of functional analogues such as metadata, in networked information retrieval.
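    A hedged illustration of one common way of quantifying the readability the study compares across abstract sources: the Flesch Reading Ease score. The article does not state that this exact formula was used, so it stands in here only as a representative measure; the syllable counter is a rough vowel-group heuristic.

      import re

      def count_syllables(word):
          """Very rough syllable estimate: count groups of vowels."""
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_reading_ease(text):
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z]+", text)
          syllables = sum(count_syllables(w) for w in words)
          return (206.835
                  - 1.015 * (len(words) / len(sentences))
                  - 84.6 * (syllables / len(words)))

      sample_abstract = ("Internet subject tree abstracts were generally the shortest. "
                         "Online database abstracts were the longest and the most informative.")
      print(round(flesch_reading_ease(sample_abstract), 1))  # higher scores read more easily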
  18. Pinto, M.: Abstracting/abstract adaptation to digital environments : research trends (2003) 0.00
    
    Abstract
    The technological revolution is affecting the structure, form and content of documents, reducing the effectiveness of traditional abstracts that, to some extent, are inadequate to the new documentary conditions. Aims to show the directions in which abstracting/abstracts can evolve to achieve the necessary adequacy in the new digital environments. Three research trends are proposed: theoretical, methodological and pragmatic. Theoretically, there are some needs for expanding the document concept, reengineering abstracting and designing interdisciplinary models. Methodologically, the trend is toward the structuring, automating and qualifying of the abstracts. Pragmatically, abstracts networking, combined with alternative and complementary models, opens a new and promising horizon. Automating, structuring and qualifying abstracting/abstracts offer some short-term prospects for progress. Concludes that reengineering, networking and visualising would be middle-term fruitful areas of research toward the full adequacy of abstracting in the new electronic age.
    Source
    Journal of documentation. 59(2003) no.5, S.581-608
  19. Fraenkel, A.S.; Klein, S.T.: Information retrieval from annotated texts (1999) 0.00
    
    Source
    Journal of the American Society for Information Science. 50(1999) no.10, S.845-854
  20. Hartley, J.: Do structured abstracts take more space? : And does it matter? (2002) 0.00
    
    Source
    Journal of information science. 28(2002) no.5, S.417-422