Search (57 results, page 3 of 3)

  • Active filter: theme_ss:"Referieren" (German: abstracting)
  • Active filter: type_ss:"a"

  Each result is listed with its ClassicSimilarity relevance breakdown; a worked example of how those factors combine into the displayed score follows the result list.
  1. Armstrong, C.J.; Wheatley, A.: Writing abstracts for online databases : results of database producers' guidelines (1998) 0.00
    0.0018033426 = product of:
      0.010820055 = sum of:
        0.010820055 = weight(_text_:in in 3295) [ClassicSimilarity], result of:
          0.010820055 = score(doc=3295,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1822149 = fieldWeight in 3295, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3295)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports on one area of research in an Electronic Libraries Programme (eLib) MODELS (MOving to Distributed Environments for Library Services) supporting study in 3 investigative areas: examination of current database producers' guidelines for their abstract writers; a brief survey of abstracts in some traditional online databases; and a detailed survey of abstracts from 3 types of electronic database (print sourced online databases, Internet subject trees or directories, and Internet gateways). Examination of database producers' guidelines, reported here, gave a clear view of the intentions behind professionally produced traditional (printed index based) database abstracts and provided a benchmark against which to judge the conclusions of the larger investigations into abstract style, readability and content
  2. Endres-Niggemeyer, B.: Summarising text for intelligent communication : results of the Dagstuhl seminar (1994) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 8867) [ClassicSimilarity], result of:
          0.010709076 = score(doc=8867,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 8867, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=8867)
      0.16666667 = coord(1/6)
    
    Abstract
    As a result of the transition to full-text storage, multimedia and networking, information systems are becoming more efficient but at the same time more difficult to use, in particular because users are confronted with information volumes that increasingly exceed individual processing capacities. Consequently, there is an increase in the demand for user aids such as summarising techniques. Against this background, the interdisciplinary Dagstuhl Seminar 'Summarising Text for Intelligent Communication' (Dec. 1993) outlined the academic state of the art with regard to summarising (abstracting) and proposed future directions for research and system development. Research is currently shifting its attention from text summarising to summarising states of affairs. Recycling solutions are put forward in order to satisfy short-term needs for summarisation products. In the medium and long term, it is necessary to devise concepts and methods of intelligent summarising which have a better formal and empirical grounding and a more modular organisation
  3. Tibbo, H.R.: Abstracting across the disciplines : a content analysis of abstracts for the natural sciences, the social sciences, and the humanities with implications for abstracting standards and online information retrieval (1992) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 2536) [ClassicSimilarity], result of:
          0.010096614 = score(doc=2536,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 2536, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2536)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports on a comparison of the "content categories" listed in the ANSI/ISO abstracting standards to actual content found in abstracts from the sciences, social sciences, and the humanities. The preliminary findings question the fundamental concept underlying these standards, namely, that any one set of standards and generalized instructions can describe and elicit the optimal configuration for abstracts from all subject areas
  4. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 2926) [ClassicSimilarity], result of:
          0.010096614 = score(doc=2926,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 2926, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
      0.16666667 = coord(1/6)
    
  5. Booth, A.; O'Rourke, A.J.: The value of structured abstracts in information retrieval from MEDLINE (1997) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 764) [ClassicSimilarity], result of:
          0.010096614 = score(doc=764,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 764, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=764)
      0.16666667 = coord(1/6)
    
    Abstract
    Presents a structured abstract of the actual article. Outlines the debate on the value of structured abstracts and describes a research project into their use, which investigated records of cardiovascular disease downloaded from MEDLINE and tested against clinical questions derived from a survey of CD-ROM use in 3 health science libraries. It was found that structured abstracts improve precision at the expense of recall and place heavier demands on the skills of selecting fields to search within the abstract. Indicates directions for further research
  6. Montesi, M.; Mackenzie Owen, J.: Revision of author abstracts : how it is carried out by LISA editors (2007) 0.00
    0.0016629322 = product of:
      0.009977593 = sum of:
        0.009977593 = weight(_text_:in in 807) [ClassicSimilarity], result of:
          0.009977593 = score(doc=807,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16802745 = fieldWeight in 807, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=807)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The literature on abstracts recommends the revision of author-supplied abstracts before their inclusion in database collections. However, little guidance is given on how to carry out such revision, and few studies exist on this topic. The purpose of this research paper is to first survey 187 bibliographic databases to ascertain how many did revise abstracts, and then study the practical amendments made by one of these, i.e. LISA (Library and Information Science Abstracts). Design/methodology/approach - Database policies were established by e-mail or through alternative sources, with 136 databases out of 187 exhaustively documented. Differences between 100 author-supplied abstracts and the corresponding 100 LISA amended abstracts were classified into sentence-level and beyond sentence-level categories, and then as additions, deletions and rephrasing of text. Findings - Revision of author abstracts was carried out by 66 databases, but in just 32 cases did it imply more than spelling, shortening of length and formula representation. In LISA, amendments were often non-systematic and inconsistent, but still pointed to significant aspects which were discussed. Originality/value - Amendments made by LISA editors are important in multi- and inter-disciplinary research, since they tend to clarify certain aspects such as terminology, and suggest that abstracts should not always be considered as substitutes for the original document. From this point of view, the revision of abstracts can be considered as an important factor in enhancing a database's quality.
  7. Pinto, M.: Abstracting/abstract adaptation to digital environments : research trends (2003) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 4446) [ClassicSimilarity], result of:
          0.009274333 = score(doc=4446,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 4446, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4446)
      0.16666667 = coord(1/6)
    
    Abstract
    The technological revolution is affecting the structure, form and content of documents, reducing the effectiveness of traditional abstracts that, to some extent, are inadequate to the new documentary conditions. Aims to show the directions in which abstracting/abstracts can evolve to achieve the necessary adequacy in the new digital environments. Three research trends are proposed: theoretical, methodological and pragmatic. Theoretically, there is a need to expand the document concept, reengineer abstracting and design interdisciplinary models. Methodologically, the trend is toward the structuring, automating and qualifying of the abstracts. Pragmatically, abstracts networking, combined with alternative and complementary models, opens a new and promising horizon. Automating, structuring and qualifying abstracting/abstract offer some short-term prospects for progress. Concludes that reengineering, networking and visualising would be fruitful medium-term areas of research toward the full adequacy of abstracting in the new electronic age.
  8. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 3549) [ClassicSimilarity], result of:
          0.008924231 = score(doc=3549,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 3549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=3549)
      0.16666667 = coord(1/6)
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
  9. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 2930) [ClassicSimilarity], result of:
          0.008834538 = score(doc=2930,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 2930, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.16666667 = coord(1/6)
    
    Abstract
    4 working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - 4 working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract
  10. Sen, B.K.: Research articles in LISA Plus : problems of identification (1997) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 430) [ClassicSimilarity], result of:
          0.008834538 = score(doc=430,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 430, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=430)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports results of a study to determine how easily and quickly research articles in library and information science could be retrieved from the LISA Plus CD-ROM database. Results show that a search with the descriptor 'research' retrieves all types of articles, making it necessary to read through every abstract to locate the research articles. The introductory sentence of a substantial number of abstracts hinders the process of identification, since it provides such information as the conference where the paper was presented, the special issue or section of a periodical where the article is located, or obvious background information. Suggests measures whereby research articles can be identified easily and rapidly
  11. Cross, C.; Oppenheim, C.: A genre analysis of scientific abstracts (2006) 0.00
    0.0014573209 = product of:
      0.008743925 = sum of:
        0.008743925 = weight(_text_:in in 5603) [ClassicSimilarity], result of:
          0.008743925 = score(doc=5603,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14725187 = fieldWeight in 5603, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5603)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The purpose of the paper is to analyse the structure of a small number of abstracts that have appeared in the CABI database over a number of years, during which time the authorship of the abstracts changed from CABI editorial staff to journal article authors themselves. This paper reports a study of the semantic organisation and thematic structure of 12 abstracts from the field of protozoology in an effort to discover whether these abstracts followed generally agreed abstracting guidelines. Design/methodology/approach - The method adopted was a move analysis of the text of the abstracts. This move analysis revealed a five-move pattern: move 1 situates the research within the scientific community; move 2 introduces the research by either describing the main features of the research or presenting its purpose; move 3 describes the methodology; move 4 states the results; and move 5 draws conclusions or suggests practical applications. Findings - Thematic analysis shows that scientific abstract authors thematise their subject by referring to the discourse domain or the "real" world. Not all of the abstracts succeeded in following the guideline advice. However, there was general consistency regarding semantic organisation and thematic structure. Research limitations/implications - The research limitations were the small number of abstracts examined, from just one subject domain. Practical implications - The practical implications are the need for abstracting services to be clearer and more prescriptive regarding how they want abstracts to be structured, as the lack of formal training in abstract writing increases the risk of subjectivity and verbosity and reduces clarity in scientific abstracts. Another implication of the research is that abstracting and indexing services must ensure that they maintain abstract quality if they introduce policies of accepting author abstracts. This is important as there is probably little formal training in abstract writing for science students at present. Recommendations for further research are made. Originality/value - This paper reports a study of the semantic organisation and thematic structure of 12 abstracts from the field of protozoology.
  12. Bakewell, K.G.B.; Rowland, G.: Indexing and abstracting (1993) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 5540) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=5540,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 5540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5540)
      0.16666667 = coord(1/6)
    
    Abstract
    State-of-the-art review of UK developments in indexing and abstracting during the period 1986-1990, covering: bibliographies of indexing and abstracting; British standards (including the revised British Standard on indexing, BS 3700); the Wheatley Medal and Carey Award; a list of indexes published during this period; the role of the computer and automatic indexing; hypermedia; PRECIS; POPSI; relational indexing; thesauri; education and training; the indexing process; newspaper indexing; fiction indexes; the indexing profession; and a review of abstracting and indexing services
  13. Molina, M.P.: Documentary abstracting : toward a methodological approach (1995) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 1790) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1790,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1790)
      0.16666667 = coord(1/6)
    
    Abstract
    In the general abstracting process (GAP), there are 2 types of data: textual, within a particular framed trilogy (surface, deep, and rhetoric); and documentary (abstractor, means of production, and user demands). Proposes, for its development, the use of the following disciplines, among others: linguistics (structural, transformational, and textual), logic (formal and fuzzy), and psychology (cognitive). The model for that textual transformation is based on a system of combined strategies with 4 key stages: reading understanding, selection, interpretation, and synthesis
  14. Wang, F.L.; Yang, C.C.: The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    0.0010517307 = product of:
      0.006310384 = sum of:
        0.006310384 = weight(_text_:in in 5049) [ClassicSimilarity], result of:
          0.006310384 = score(doc=5049,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 5049, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.16666667 = coord(1/6)
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports the users in evaluating the relevance of the extracted documents returned by information retrieval systems. Because of this tool, efficient filtering can be achieved. Indirectly, these systems help to resolve the problem of information overload. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of language differences on automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language differences on automatic text summarization. It includes the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using a parallel corpus in English and Chinese as the test object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
  15. Alonso, M.I.; Fernández, L.M.M.: Perspectives of studies on document abstracting : towards an integrated view of models and theoretical approaches (2010) 0.00
    0.0010517307 = product of:
      0.006310384 = sum of:
        0.006310384 = weight(_text_:in in 3959) [ClassicSimilarity], result of:
          0.006310384 = score(doc=3959,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 3959, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3959)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The aim of this paper is to systemize and improve the scientific status of studies on document abstracting. This is a diachronic, systematic study of document abstracting studies carried out from different perspectives and models (textual, psycholinguistic, social and communicative). Design/methodology/approach - A review of the perspectives and analysis proposals which are of interest to the various theoreticians of abstracting is carried out using a variety of techniques and approaches (cognitive, linguistic, communicative-social, didactic, etc.), each with different levels of theoretical and methodological abstraction and degrees of application. The most significant contributions of each are reviewed and highlighted, along with their limitations. Findings - It is found that the great challenge in abstracting is the systemization of models and conceptual apparatus, which open up this type of research to semiotic and socio-interactional perspectives. It is necessary to carry out suitable empirical research with operative designs and ad hoc measuring instruments which can measure the efficiency of the abstracting and the efficiency of a good abstract, while at the same time feeding back into the theoretical baggage of this type of study. Such research will have to explain and provide answers to all the elements and variables, which affect the realization and the reception of a quality abstract. Originality/value - The paper provides a small map of the studies on document abstracting. This shows how the conceptual and methodological framework has extended at the same time as the Science of Documentation has been evolving. All the models analysed - the communicative and interactional approach - are integrated in a new systematic framework.
  16. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997) 0.00
    8.9242304E-4 = product of:
      0.005354538 = sum of:
        0.005354538 = weight(_text_:in in 824) [ClassicSimilarity], result of:
          0.005354538 = score(doc=824,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=824)
      0.16666667 = coord(1/6)
    
    Abstract
    Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role of abstracts and their functional analogues, such as metadata, in networked information retrieval
  17. Wilson, M.J.; Wilson, M.L.: A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.00
    7.4368593E-4 = product of:
      0.0044621155 = sum of:
        0.0044621155 = weight(_text_:in in 612) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=612,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=612)
      0.16666667 = coord(1/6)
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
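
The per-result score trees above are ClassicSimilarity explain output, as produced by Lucene-based search engines. As a rough, non-authoritative check, the Python sketch below recombines the factors shown for result 1 (doc 3295, term "in", freq=6.0), assuming the classic TF-IDF formula tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm and score = coord * queryWeight * fieldWeight; the variable names are illustrative only, not taken from any library.

    import math

    # Factors copied from the breakdown of result 1 (doc 3295, term "in").
    idf = 1.3602545           # idf(docFreq=30841, maxDocs=44218)
    query_norm = 0.043654136  # queryNorm
    freq = 6.0                # termFreq of "in" in the field
    field_norm = 0.0546875    # fieldNorm(doc=3295)
    coord = 1.0 / 6.0         # coord(1/6): one of six query clauses matched

    # Assumed ClassicSimilarity combination (TF-IDF with a field-length norm).
    tf = math.sqrt(freq)                  # 2.4494898
    query_weight = idf * query_norm       # 0.059380736
    field_weight = tf * idf * field_norm  # 0.1822149
    score = coord * query_weight * field_weight

    print(f"{score:.10f}")  # ~0.0018033426, the value shown for result 1

    # The idf is consistent with the assumed form 1 + ln(maxDocs / (docFreq + 1)):
    print(1 + math.log(44218 / (30841 + 1)))  # ~1.36025

The other entries follow the same pattern, with only freq, fieldNorm and the document number changing from record to record.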

Languages

  • e (English) 41
  • d (German) 16