Search (56 results, page 3 of 3)

  • language_ss:"e"
  • theme_ss:"Referieren"
  1. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007) 0.00
    4.1050607E-4 = product of:
      0.006978603 = sum of:
        0.006978603 = weight(_text_:in in 807) [ClassicSimilarity], result of:
          0.006978603 = score(doc=807,freq=10.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.16802745 = fieldWeight in 807, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=807)
      0.05882353 = coord(1/17)
    
    Abstract
    Purpose - The literature on abstracts recommends the revision of author-supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is first to survey 187 bibliographic databases to ascertain how many revised abstracts, and then to study the practical amendments made by one of these, i.e. LISA (Library and Information Science Abstracts). Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 databases out of 187 exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA-amended abstracts were classified into sentence-level and beyond-sentence-level categories, and then as additions, deletions and rephrasing of text. Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it involve more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects, which are discussed. Originality/value - Amendments made by LISA editors are important in multi- and inter-disciplinary research, since they tend to clarify certain aspects such as terminology, and suggest that abstracts should not always be considered as substitutes for the original document. From this point of view, the revision of abstracts can be considered an important factor in enhancing a database's quality.
  2. Endres-Niggemeyer, B.: Summarizing information (1998) 0.00
    3.8157197E-4 = product of:
      0.006486723 = sum of:
        0.006486723 = weight(_text_:in in 688) [ClassicSimilarity], result of:
          0.006486723 = score(doc=688,freq=6.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.1561842 = fieldWeight in 688, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=688)
      0.05882353 = coord(1/17)
    
    Abstract
    Summarizing is the process of reducing the large information size of something like a novel or a scientific paper to a short summary or abstract comprising only the most essential points. Summarizing is frequent in everyday communication, but it is also a professional skill for journalists and others. Automated summarizing functions are urgently needed by Internet users who wish to avoid being overwhelmed by information. This book presents the state of the art and surveys related research; it deals with everyday and professional summarizing as well as computerized approaches. The author focuses in detail on the cognitive process involved in summarizing and supports this with a multimedia simulation system on the accompanying CD-ROM.
  3. Pinto, M.: Abstracting/abstract adaptation to digital environments : research trends (2003) 0.00
    3.8157197E-4 = product of:
      0.006486723 = sum of:
        0.006486723 = weight(_text_:in in 4446) [ClassicSimilarity], result of:
          0.006486723 = score(doc=4446,freq=6.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.1561842 = fieldWeight in 4446, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4446)
      0.05882353 = coord(1/17)
    
    Abstract
    The technological revolution is affecting the structure, form and content of documents, reducing the effectiveness of traditional abstracts that, to some extent, are inadequate to the new documentary conditions. Aims to show the directions in which abstracting/abstracts can evolve to achieve the necessary adequacy in the new digital environments. Three research trends are proposed: theoretical, methodological and pragmatic. Theoretically, there is a need to expand the document concept, to reengineer abstracting and to design interdisciplinary models. Methodologically, the trend is toward the structuring, automating and qualifying of abstracts. Pragmatically, abstracts networking, combined with alternative and complementary models, opens a new and promising horizon. Automating, structuring and qualifying abstracting/abstracts offer some short-term prospects for progress. Concludes that reengineering, networking and visualising would be fruitful medium-term areas of research toward the full adequacy of abstracting in the new electronic age.
  4. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.00
    3.6716778E-4 = product of:
      0.0062418524 = sum of:
        0.0062418524 = weight(_text_:in in 3549) [ClassicSimilarity], result of:
          0.0062418524 = score(doc=3549,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.15028831 = fieldWeight in 3549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=3549)
      0.05882353 = coord(1/17)
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
  5. Cremmins, E.T.: The art of abstracting (1996) 0.00
    3.6716778E-4 = product of:
      0.0062418524 = sum of:
        0.0062418524 = weight(_text_:in in 282) [ClassicSimilarity], result of:
          0.0062418524 = score(doc=282,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.15028831 = fieldWeight in 282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=282)
      0.05882353 = coord(1/17)
    
    Footnote
    Review in: JASIS 48(1997) no.7, pp. 670-671 (C.A. Bean)
  6. Koltay, T.: Abstracts and abstracting : a genre and set of skills for the twenty-first century (2010) 0.00
    3.6716778E-4 = product of:
      0.0062418524 = sum of:
        0.0062418524 = weight(_text_:in in 4125) [ClassicSimilarity], result of:
          0.0062418524 = score(doc=4125,freq=8.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.15028831 = fieldWeight in 4125, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4125)
      0.05882353 = coord(1/17)
    
    Abstract
    Despite their changing role, abstracts remain useful in the digital world. Aimed at both information professionals and researchers who work and publish in different fields, this book summarizes the most important and up-to-date theory of abstracting, as well as giving advice and examples for the practice of writing different kinds of abstracts. The book discusses the length, the functions and the basic structure of abstracts. A new approach is outlined on the questions of informative and indicative abstracts. The abstractors' personality and their linguistic and non-linguistic knowledge and skills are also discussed with special attention. The process of abstracting, its steps and models, as well as the recipient's role, are treated with special distinction. Abstracting is presented as an aimed (purported) understanding of the original text, its interpretation and then a special projection of the information deemed to be worthy of abstracting into a new text. Despite the relatively large number of textbooks on the topic, there is no up-to-date book on abstracting in the English language. In addition to providing comprehensive coverage of the topic, the book contains novel views, especially on informative and indicative abstracts. The discussion is based on an interdisciplinary approach, blending the methods of library and information science and linguistics. The book strives for a synthesis of theory and practice. The synthesis is based on a large existing body of knowledge which, however, is often characterised by misleading terminology and flawed beliefs.
  7. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    3.6347756E-4 = product of:
      0.0061791185 = sum of:
        0.0061791185 = weight(_text_:in in 2930) [ClassicSimilarity], result of:
          0.0061791185 = score(doc=2930,freq=4.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.14877784 = fieldWeight in 2930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.05882353 = coord(1/17)
    
    Abstract
    Four working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge-processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case: four working steps in which an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract.
  8. Sen, B.K.: Research articles in LISA Plus : problems of identification (1997) 0.00
    3.6347756E-4 = product of:
      0.0061791185 = sum of:
        0.0061791185 = weight(_text_:in in 430) [ClassicSimilarity], result of:
          0.0061791185 = score(doc=430,freq=4.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.14877784 = fieldWeight in 430, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=430)
      0.05882353 = coord(1/17)
    
    Abstract
    Reports results of a study to determine how easily and quickly research articles in library and information science could be retrieved from the LISA Plus CD-ROM database. Results show that a search with the descriptor 'research' retrieves all types of articles, so it is necessary to read through every abstract to locate the research articles. The introductory sentence of a substantial number of abstracts hinders the process of identification, since the sentence provides such information as the conference where the paper was presented, the special issue or section of a periodical where the article is located, or obvious background information. Suggests measures whereby research articles can be identified easily and rapidly.
  9. Cross, C.; Oppenheim, C.: A genre analysis of scientific abstracts (2006) 0.00
    3.597495E-4 = product of:
      0.0061157416 = sum of:
        0.0061157416 = weight(_text_:in in 5603) [ClassicSimilarity], result of:
          0.0061157416 = score(doc=5603,freq=12.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.14725187 = fieldWeight in 5603, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5603)
      0.05882353 = coord(1/17)
    
    Abstract
    Purpose - The purpose of the paper is to analyse the structure of a small number of abstracts that have appeared in the CABI database over a number of years, during which time the authorship of the abstracts changed from CABI editorial staff to journal article authors themselves. This paper reports a study of the semantic organisation and thematic structure of 12 abstracts from the field of protozoology in an effort to discover whether these abstracts followed generally agreed abstracting guidelines. Design/methodology/approach - The method adopted was a move analysis of the text of the abstracts. This move analysis revealed a five-move pattern: move 1 situates the research within the scientific community; move 2 introduces the research by either describing the main features of the research or presenting its purpose; move 3 describes the methodology; move 4 states the results; and move 5 draws conclusions or suggests practical applications. Findings - Thematic analysis shows that scientific abstract authors thematise their subject by referring to the discourse domain or the "real" world. Not all of the abstracts succeeded in following the guideline advice. However, there was general consistency regarding semantic organisation and thematic structure. Research limitations/implications - The research limitations were the small number of abstracts examined, from just one subject domain. Practical implications - The practical implications are the need for abstracting services to be clearer and more prescriptive regarding how they want abstracts to be structured, as the lack of formal training in abstract writing increases the risk of subjectivity and verbosity and reduces clarity in scientific abstracts. Another implication of the research is that abstracting and indexing services must ensure that they maintain abstract quality if they introduce policies of accepting author abstracts. This is important as there is probably little formal training in abstract writing for science students at present. Recommendations for further research are made. Originality/value - This paper reports a study of the semantic organisation and thematic structure of 12 abstracts from the field of protozoology.
  10. Bakewell, K.G.B.; Rowland, G.: Indexing and abstracting (1993) 0.00
    2.9373422E-4 = product of:
      0.0049934816 = sum of:
        0.0049934816 = weight(_text_:in in 5540) [ClassicSimilarity], result of:
          0.0049934816 = score(doc=5540,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.120230645 = fieldWeight in 5540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5540)
      0.05882353 = coord(1/17)
    
    Abstract
    State-of-the-art review of UK developments in indexing and abstracting during the period 1986-1990, covering: bibliographies of indexing and abstracting; British standards (including the revised British Standard on indexing, BS 3700); the Wheatley Medal and Carey Award; a list of indexes published during this period; the role of the computer and automatic indexing; hypermedia; PRECIS; POPSI; relational indexing; thesauri; education and training; the indexing process; newspaper indexing; fiction indexes; the indexing profession; and a review of abstracting and indexing services.
  11. Molina, M.P.: Documentary abstracting : toward a methodological approach (1995) 0.00
    2.9373422E-4 = product of:
      0.0049934816 = sum of:
        0.0049934816 = weight(_text_:in in 1790) [ClassicSimilarity], result of:
          0.0049934816 = score(doc=1790,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.120230645 = fieldWeight in 1790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1790)
      0.05882353 = coord(1/17)
    
    Abstract
    In the general abstracting process (GAP), there are two types of data: textual, within a particular framed trilogy (surface, deep, and rhetoric); and documentary (abstractor, means of production, and user demands). Proposes, for its development, the use of the following disciplines, among others: linguistics (structural, transformational, and textual), logic (formal and fuzzy), and psychology (cognitive). The model for this textual transformation is based on a system of combined strategies with four key stages: reading-understanding, selection, interpretation, and synthesis.
  12. Cremmins, E.T.: The art of abstracting (1996) 0.00
    2.9373422E-4 = product of:
      0.0049934816 = sum of:
        0.0049934816 = weight(_text_:in in 1007) [ClassicSimilarity], result of:
          0.0049934816 = score(doc=1007,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.120230645 = fieldWeight in 1007, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1007)
      0.05882353 = coord(1/17)
    
    Footnote
    Review in: Journal of chemical information and computer sciences 36(1996) no.5, p. 1050 (V.K. Raman); Information processing and management 33(1997) no.4, p. 573 (H.R. Tibbo)
  13. Wang, F.L.; Yang, C.C.: The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    2.5962683E-4 = product of:
      0.004413656 = sum of:
        0.004413656 = weight(_text_:in in 5049) [ClassicSimilarity], result of:
          0.004413656 = score(doc=5049,freq=4.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.10626988 = fieldWeight in 5049, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.05882353 = coord(1/17)
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports users in evaluating the relevance of the extracted documents returned by information retrieval systems, allowing efficient filtering. Indirectly, these systems help to resolve the problem of information overload. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of language differences on automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language differences on automatic text summarization. It covers the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using a parallel corpus in English and Chinese as the test object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
  14. Alonso, M.I.; Fernández, L.M.M.: Perspectives of studies on document abstracting : towards an integrated view of models and theoretical approaches (2010) 0.00
    2.5962683E-4 = product of:
      0.004413656 = sum of:
        0.004413656 = weight(_text_:in in 3959) [ClassicSimilarity], result of:
          0.004413656 = score(doc=3959,freq=4.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.10626988 = fieldWeight in 3959, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3959)
      0.05882353 = coord(1/17)
    
    Abstract
    Purpose - The aim of this paper is to systematize and improve the scientific status of studies on document abstracting. This is a diachronic, systematic study of document abstracting studies carried out from different perspectives and models (textual, psycholinguistic, social and communicative). Design/methodology/approach - A review of the perspectives and analysis proposals which are of interest to the various theoreticians of abstracting is carried out using a variety of techniques and approaches (cognitive, linguistic, communicative-social, didactic, etc.), each with different levels of theoretical and methodological abstraction and degrees of application. The most significant contributions of each are reviewed and highlighted, along with their limitations. Findings - It is found that the great challenge in abstracting is the systematization of models and conceptual apparatus, which open up this type of research to semiotic and socio-interactional perspectives. It is necessary to carry out suitable empirical research with operative designs and ad hoc measuring instruments which can measure the efficiency of the abstracting process and the efficiency of a good abstract, while at the same time feeding back into the theoretical baggage of this type of study. Such research will have to explain and provide answers to all the elements and variables that affect the realization and the reception of a quality abstract. Originality/value - The paper provides a small map of the studies on document abstracting. This shows how the conceptual and methodological framework has expanded as the Science of Documentation has evolved. All the models analysed - the communicative and interactional approach - are integrated into a new systematic framework.
  15. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997) 0.00
    2.2030066E-4 = product of:
      0.0037451112 = sum of:
        0.0037451112 = weight(_text_:in in 824) [ClassicSimilarity], result of:
          0.0037451112 = score(doc=824,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.09017298 = fieldWeight in 824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=824)
      0.05882353 = coord(1/17)
    
    Abstract
    Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role of abstracts and their functional analogues, such as metadata, in networked information retrieval.
  16. Wilson, M.J.; Wilson, M.L.: A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.00
    1.8358389E-4 = product of:
      0.0031209262 = sum of:
        0.0031209262 = weight(_text_:in in 612) [ClassicSimilarity], result of:
          0.0031209262 = score(doc=612,freq=2.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.07514416 = fieldWeight in 612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=612)
      0.05882353 = coord(1/17)
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
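
The relevance figures attached to each result above are Lucene ClassicSimilarity (TF-IDF) explain breakdowns. As a check on how the displayed numbers fit together, here is a minimal Python sketch that reproduces the breakdown for result 1 (doc 807) purely from the values it lists; the variable names follow Lucene's explain output, and nothing beyond those displayed values is assumed.

```python
import math

# Reproduce the ClassicSimilarity explain breakdown shown for result 1 (doc 807).
freq       = 10.0         # termFreq of _text_:in in doc 807
doc_freq   = 30841        # docFreq from the idf line
max_docs   = 44218        # maxDocs from the idf line
query_norm = 0.030532904  # queryNorm as displayed
field_norm = 0.0390625    # fieldNorm(doc=807)
coord      = 1 / 17       # coord(1/17): one of 17 query clauses matched

tf  = math.sqrt(freq)                          # 3.1622777 = tf(freq=10.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.3602545 = idf(docFreq=30841, maxDocs=44218)

query_weight = idf * query_norm                # 0.04153252 = queryWeight
field_weight = tf * idf * field_norm           # 0.16802745 = fieldWeight
score = query_weight * field_weight * coord    # 4.1050607E-4, the value heading the breakdown

print(f"{score:.7E}")
```

The same arithmetic applies to every breakdown on this page; only freq, fieldNorm and the document id differ, while queryWeight (0.04153252) and coord (1/17) stay constant.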

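Result 13 above describes extractive summarization systems, which score sentences by salient features and return the highest-scoring ones. As a rough illustration of that general idea only (not the authors' multilingual system), here is a minimal sketch in which the "salient features" are simply raw term frequencies; the tokenization and scoring choices are assumptions made for the example.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Toy extractive summarizer: keep the sentences whose terms are most frequent."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    term_freq = Counter(re.findall(r'[a-z]+', text.lower()))

    # Score a sentence by the average corpus frequency of its terms.
    def score(sentence: str) -> float:
        terms = re.findall(r'[a-z]+', sentence.lower())
        return sum(term_freq[t] for t in terms) / len(terms) if terms else 0.0

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Emit the selected sentences in their original document order.
    return ' '.join(s for s in sentences if s in top)
```

A fuller system of the kind the paper analyses would need language-specific tokenization (for example, word segmentation for Chinese) and richer scoring features, which is where the grammatical and lexical differences it studies come into play.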
Types

  • a 41
  • m 11
  • r 2
  • b 1
  • s 1