Search (5 results, page 1 of 1)

  • year_i:[2010 TO 2020}
  • theme_ss:"Referieren"
  1. Wilson, M.J.; Wilson, M.L.: A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.00
    Score: 0.0026742492 (ClassicSimilarity weight of _text_:a in doc 612: freq=20.0, idf=1.153047 (docFreq=37942, maxDocs=44218), queryNorm=0.046056706, fieldNorm=0.0390625, coord=1/2 × 1/2)
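    All five per-result score breakdowns in this listing follow the same Lucene ClassicSimilarity (TF-IDF) arithmetic; only the term frequency, fieldNorm, and coord factors differ. The short Python sketch below reproduces the figure for result 1 from the factors shown above; it is a minimal illustration with hypothetical function and variable names, not the Lucene API.

      import math

      # Sketch of Lucene ClassicSimilarity scoring, using the explain values
      # reported for result 1 (doc 612, term "_text_:a"). Names are illustrative.
      def classic_similarity_score(freq, idf, query_norm, field_norm, coords):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          score = query_weight * field_weight   # weight(_text_:a in 612)
          for coord in coords:                  # coord(1/2) applied at each query level
              score *= coord
          return score

      score = classic_similarity_score(
          freq=20.0,
          idf=1.153047,             # idf(docFreq=37942, maxDocs=44218)
          query_norm=0.046056706,
          field_norm=0.0390625,
          coords=[0.5, 0.5],
      )
      print(f"{score:.10f}")        # approx. 0.0026742492, the value reported above (up to float rounding)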
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measures were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both the fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
    Type
    a
  2. Alonso, M.I.; Fernández, L.M.M.: Perspectives of studies on document abstracting : towards an integrated view of models and theoretical approaches (2010) 0.00
    Score: 0.0023919214 (ClassicSimilarity, _text_:a in doc 3959: freq=16.0, fieldNorm=0.0390625, coord=1/2 × 1/2)
    
    Abstract
    Purpose - The aim of this paper is to systemize and improve the scientific status of studies on document abstracting. This is a diachronic, systematic study of document abstracting studies carried out from different perspectives and models (textual, psycholinguistic, social and communicative). Design/methodology/approach - A review of the perspectives and analysis proposals that are of interest to the various theoreticians of abstracting is carried out using a variety of techniques and approaches (cognitive, linguistic, communicative-social, didactic, etc.), each with different levels of theoretical and methodological abstraction and degrees of application. The most significant contributions of each are reviewed and highlighted, along with their limitations. Findings - It is found that the great challenge in abstracting is the systemization of models and conceptual apparatus, which opens up this type of research to semiotic and socio-interactional perspectives. It is necessary to carry out suitable empirical research with operative designs and ad hoc measuring instruments that can assess the efficiency of the abstracting process and the effectiveness of a good abstract, while at the same time feeding back into the theoretical body of knowledge of this type of study. Such research will have to explain and account for all the elements and variables that affect the realization and reception of a quality abstract. Originality/value - The paper provides a concise map of studies on document abstracting. This shows how the conceptual and methodological framework has expanded as the Science of Documentation has evolved. All the models analysed are integrated, under a communicative and interactional approach, into a new systematic framework.
    Type
    a
  3. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017) 0.00
    Score: 0.002269176 (ClassicSimilarity, _text_:a in doc 3788: freq=10.0, fieldNorm=0.046875, coord=1/2 × 1/2)
    
    Abstract
    We address the challenge of extracting query-biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.
    Type
    a
  4. Koltay, T.: Abstracts and abstracting : a genre and set of skills for the twenty-first century (2010) 0.00
    Score: 0.0022374375 (ClassicSimilarity, _text_:a in doc 4125: freq=14.0, fieldNorm=0.0390625, coord=1/2 × 1/2)
    
    Abstract
    Despite their changing role, abstracts remain useful in the digital world. Aimed at both information professionals and researchers who work and publish in different fields, this book summarizes the most important and up-to-date theory of abstracting, and gives advice and examples for the practice of writing different kinds of abstracts. The book discusses the length, functions, and basic structure of abstracts. A new approach to the questions of informative and indicative abstracts is outlined. Special attention is also given to the abstractor's personality and to their linguistic and non-linguistic knowledge and skills. The process of abstracting, its steps and models, as well as the recipient's role, receive particular emphasis. Abstracting is presented as a purposeful understanding of the original text, its interpretation, and then a special projection of the information deemed worth abstracting into a new text. Despite the relatively large number of textbooks on the topic, there is no up-to-date book on abstracting in the English language. In addition to providing comprehensive coverage of the topic, the proposed book contains novel views - especially on informative and indicative abstracts. The discussion is based on an interdisciplinary approach, blending the methods of library and information science and linguistics. The book strives for a synthesis of theory and practice. This synthesis is based on a large existing body of knowledge which, however, is often characterised by misleading terminology and flawed beliefs.
  5. Reischer, J.; Lottes, D.; Meier, F.; Stirner, M.: Evaluation von Summarizing-Systemen : Kommerzielle und freie Systeme im Vergleich [Evaluation of summarizing systems: commercial and free systems compared] (2010) 0.00
    Score: 0.001353075 (ClassicSimilarity, _text_:a in doc 492: freq=2.0, fieldNorm=0.0625, coord=1/2 × 1/2)
    
    Type
    a