Search (18 results, page 1 of 1)

  • × theme_ss:"Referieren"
  1. Koltay, T.: ¬A hypertext tutorial on abstracting for library science students (1995) 0.02
    0.0233882 = product of:
      0.081858695 = sum of:
        0.05540042 = weight(_text_:based in 3061) [ClassicSimilarity], result of:
          0.05540042 = score(doc=3061,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.47078028 = fieldWeight in 3061, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.078125 = fieldNorm(doc=3061)
        0.026458278 = product of:
          0.052916557 = sum of:
            0.052916557 = weight(_text_:22 in 3061) [ClassicSimilarity], result of:
              0.052916557 = score(doc=3061,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.38690117 = fieldWeight in 3061, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3061)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
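    The tree above is Lucene ClassicSimilarity `explain()` output. As a minimal sketch of how those numbers compose (assuming the standard ClassicSimilarity formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, with the coord factors scaling the sums), the top score can be reproduced:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # score = queryWeight * fieldWeight
    #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

# Numbers taken from result 1 above.
query_norm = 0.03905679
based = term_score(4.0, 5906, 44218, query_norm, 0.078125)      # _text_:based
t22 = 0.5 * term_score(2.0, 3622, 44218, query_norm, 0.078125)  # _text_:22, coord(1/2)
total = (2.0 / 7.0) * (based + t22)                             # coord(2/7)
# total agrees with the 0.0233882 shown above to within 1e-4
```

    This reproduces 0.05540042 for the `based` term and the overall 0.0233882, up to rounding in the printed intermediate values.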
    
    Abstract
    Discusses briefly the application of hypertext in library user training, with particular reference to a specific hypertext-based tutorial designed to teach library school students the basics of abstracts and of the abstracting process
    Date
    27. 1.1996 18:22:06
    Theme
    Computer Based Training
  2. Wan, X.; Yang, J.; Xiao, J.: Incorporating cross-document relationships between sentences for single document summarizations (2006) 0.02
    0.01616737 = product of:
      0.056585796 = sum of:
        0.04071083 = weight(_text_:based in 2421) [ClassicSimilarity], result of:
          0.04071083 = score(doc=2421,freq=6.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.34595144 = fieldWeight in 2421, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2421)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 2421) [ClassicSimilarity], result of:
              0.031749934 = score(doc=2421,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 2421, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2421)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Graph-based ranking algorithms have recently been proposed for single document summarizations and such algorithms evaluate the importance of a sentence by making use of the relationships between sentences in the document in a recursive way. In this paper, we investigate using other related or relevant documents to improve summarization of one single document based on the graph-based ranking algorithm. In addition to the within-document relationships between sentences in the specified document, the cross-document relationships between sentences in different documents are also taken into account in the proposed approach. We evaluate the performance of the proposed approach on DUC 2002 data with the ROUGE metric and results demonstrate that the cross-document relationships between sentences in different but related documents can significantly improve the performance of single document summarization.
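    The graph-based ranking the authors build on (TextRank/LexRank-style; this is a generic sketch, not their exact algorithm, and all names and the `d`/`iters` parameters are illustrative) treats sentences from the target document and from related documents as vertices of one similarity graph and scores them by power iteration:

```python
import math
from collections import Counter

def cosine(a, b):
    # cosine similarity between bag-of-words sentence vectors
    num = sum(c * b[t] for t, c in a.items())
    den = (math.sqrt(sum(c * c for c in a.values()))
           * math.sqrt(sum(c * c for c in b.values())))
    return num / den if den else 0.0

def rank_sentences(sentences, d=0.85, iters=50):
    # Vertices are sentences (from the target document and related documents);
    # edges are weighted by similarity; scores come from power iteration.
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    w = [[0.0 if i == j else cosine(vecs[i], vecs[j]) for j in range(n)]
         for i in range(n)]
    out = [sum(row) for row in w]
    score = [1.0 / n] * n
    for _ in range(iters):
        score = [(1 - d) / n + d * sum(w[j][i] * score[j] / out[j]
                                       for j in range(n) if out[j])
                 for i in range(n)]
    return score

sents = [
    "graph based ranking of sentences",  # from the target document
    "ranking sentences with a graph",    # from a related document
    "an unrelated remark about weather",
]
scores = rank_sentences(sents)
```

    Sentences supported by similar sentences elsewhere, including in other documents, accumulate higher scores than isolated ones, which is the intuition behind using cross-document relationships.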
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  3. Bowman, J.H.: Annotation: a lost art in cataloguing (2007) 0.01
    0.012016361 = product of:
      0.08411452 = sum of:
        0.08411452 = product of:
          0.16822904 = sum of:
            0.16822904 = weight(_text_:britain in 255) [ClassicSimilarity], result of:
              0.16822904 = score(doc=255,freq=2.0), product of:
                0.29147226 = queryWeight, product of:
                  7.462781 = idf(docFreq=68, maxDocs=44218)
                  0.03905679 = queryNorm
                0.57717 = fieldWeight in 255, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.462781 = idf(docFreq=68, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=255)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Public library catalogues in early twentieth-century Britain frequently included annotations, either to clarify obscure titles or to provide further information about the subject-matter of the books they described. Two manuals giving instruction on how to do this were published at that time. Following World War I, with the decline of the printed catalogue, this kind of annotation became rarer, and was almost confined to bulletins of new books. The early issues of the British National Bibliography included some annotations in exceptional cases. Parallels are drawn with the provision of table-of-contents information in present-day OPACs.
  4. Alonso, M.I.; Fernández, L.M.M.: Perspectives of studies on document abstracting : towards an integrated view of models and theoretical approaches (2010) 0.01
    0.00977261 = product of:
      0.068408266 = sum of:
        0.068408266 = weight(_text_:great in 3959) [ClassicSimilarity], result of:
          0.068408266 = score(doc=3959,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.31105953 = fieldWeight in 3959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3959)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The aim of this paper is to systemize and improve the scientific status of studies on document abstracting. This is a diachronic, systematic study of document abstracting studies carried out from different perspectives and models (textual, psycholinguistic, social and communicative). Design/methodology/approach - A review of the perspectives and analysis proposals which are of interest to the various theoreticians of abstracting is carried out using a variety of techniques and approaches (cognitive, linguistic, communicative-social, didactic, etc.), each with different levels of theoretical and methodological abstraction and degrees of application. The most significant contributions of each are reviewed and highlighted, along with their limitations. Findings - It is found that the great challenge in abstracting is the systemization of models and conceptual apparatus, which opens up this type of research to semiotic and socio-interactional perspectives. It is necessary to carry out suitable empirical research with operative designs and ad hoc measuring instruments which can measure both the efficiency of abstracting and the effectiveness of a good abstract, while at the same time feeding back into the theoretical basis of this type of study. Such research will have to explain and provide answers to all the elements and variables which affect the realization and the reception of a quality abstract. Originality/value - The paper provides a small map of the studies on document abstracting. This shows how the conceptual and methodological framework has expanded at the same time as the Science of Documentation has been evolving. All the models analysed, including the communicative and interactional approaches, are integrated into a new systematic framework.
  5. Endres-Niggemeyer, B.: Content analysis : a special case of text compression (1989) 0.01
    0.005596288 = product of:
      0.039174013 = sum of:
        0.039174013 = weight(_text_:based in 3549) [ClassicSimilarity], result of:
          0.039174013 = score(doc=3549,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.33289194 = fieldWeight in 3549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.078125 = fieldNorm(doc=3549)
      0.14285715 = coord(1/7)
    
    Abstract
    Presents a theoretical model, based on the Flower/Hayes model of expository writing, of the process involved in content analysis for abstracting and indexing.
  6. Molina, M.P.: Documentary abstracting : toward a methodological approach (1995) 0.00
    0.00447703 = product of:
      0.03133921 = sum of:
        0.03133921 = weight(_text_:based in 1790) [ClassicSimilarity], result of:
          0.03133921 = score(doc=1790,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.26631355 = fieldWeight in 1790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=1790)
      0.14285715 = coord(1/7)
    
    Abstract
    In the general abstracting process (GAP), there are 2 types of data: textual, within a particular framed trilogy (surface, deep, and rhetoric); and documentary (abstractor, means of production, and user demands). Proposes, for its development, the use of the following disciplines, among others: linguistics (structural, transformational, and textual), logic (formal and fuzzy), and psychology (cognitive). The model for that textual transformation is based on a system of combined strategies with 4 key stages: reading understanding, selection, interpretation, and synthesis
  7. Monday, I.: ¬Les processus cognitifs et la rédaction de résumés (1996) 0.00
    0.00447703 = product of:
      0.03133921 = sum of:
        0.03133921 = weight(_text_:based in 6917) [ClassicSimilarity], result of:
          0.03133921 = score(doc=6917,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.26631355 = fieldWeight in 6917, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=6917)
      0.14285715 = coord(1/7)
    
    Abstract
    Attempts to explain the intellectual and cognitive processes which govern, on the one hand, the understanding and structure of a text and, on the other, the writing of a summary or abstract, drawing on the literature of information science, education, cognitive psychology and psychiatry
  8. Koltay, T.: Abstracts and abstracting : a genre and set of skills for the twenty-first century (2010) 0.00
    0.003957173 = product of:
      0.02770021 = sum of:
        0.02770021 = weight(_text_:based in 4125) [ClassicSimilarity], result of:
          0.02770021 = score(doc=4125,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23539014 = fieldWeight in 4125, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4125)
      0.14285715 = coord(1/7)
    
    Abstract
    Despite their changing role, abstracts remain useful in the digital world. Aimed at both information professionals and researchers who work and publish in different fields, this book summarizes the most important and up-to-date theory of abstracting, as well as giving advice and examples for the practice of writing different kinds of abstracts. The book discusses the length, functions and basic structure of abstracts. A new approach is outlined to the questions of informative and indicative abstracts. The abstractors' personality and their linguistic and non-linguistic knowledge and skills are also discussed with special attention. The process of abstracting, its steps and models, as well as the recipient's role, are treated in particular detail. Abstracting is presented as a goal-directed understanding of the original text, its interpretation, and then a special projection into a new text of the information deemed worthy of abstracting. Despite the relatively large number of textbooks on the topic, there is no up-to-date book on abstracting in the English language. In addition to providing comprehensive coverage of the topic, the proposed book contains novel views - especially on informative and indicative abstracts. The discussion is based on an interdisciplinary approach, blending the methods of library and information science and linguistics. The book strives for a synthesis of theory and practice. The synthesis is based on a large existing body of knowledge which, however, is often characterised by misleading terminology and flawed beliefs.
  9. Wilson, M.J.; Wilson, M.L.: ¬A comparison of techniques for measuring sensemaking and learning within participant-generated summaries (2013) 0.00
    0.003957173 = product of:
      0.02770021 = sum of:
        0.02770021 = weight(_text_:based in 612) [ClassicSimilarity], result of:
          0.02770021 = score(doc=612,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23539014 = fieldWeight in 612, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=612)
      0.14285715 = coord(1/7)
    
    Abstract
    While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fixed source of information. This research, however, focuses on techniques for measuring open-ended learning, which often involve analyzing handwritten summaries produced by participants after a task. There are two common techniques for analyzing such summaries: (a) counting facts and statements and (b) judging topic coverage. Both of these techniques, however, can be easily confounded by simple variables such as summary length. This article presents a new technique that measures depth of learning within written summaries based on Bloom's taxonomy (B.S. Bloom & M.D. Engelhart, 1956). This technique was generated using grounded theory and is designed to be less susceptible to such confounding variables. Together, these three categories of measure were compared by applying them to a large collection of written summaries produced in a task-based study, and our results provide insights into each of their strengths and weaknesses. Both fact-to-statement ratio and our own measure of depth of learning were effective while being less affected by confounding variables. Recommendations and clear areas of future work are provided to help continued research into supporting sensemaking and learning.
  10. Endres-Niggemeyer, B.; Maier, E.; Sigel, A.: How to implement a naturalistic model of abstracting : four core working steps of an expert abstractor (1995) 0.00
    0.0039174017 = product of:
      0.02742181 = sum of:
        0.02742181 = weight(_text_:based in 2930) [ClassicSimilarity], result of:
          0.02742181 = score(doc=2930,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.14285715 = coord(1/7)
    
    Abstract
    4 working steps taken from a comprehensive empirical model of expert abstracting are studied in order to prepare an explorative implementation of a simulation model. It aims at explaining the knowledge processing activities during professional summarizing. Following the case-based and holistic strategy of qualitative empirical research, the main features of the simulation system were developed by investigating in detail a small but central test case - 4 working steps where an expert abstractor discovers what the paper is about and drafts the topic sentence of the abstract
  11. Armstrong, C.J.; Wheatley, A.: Writing abstracts for online databases : results of database producers' guidelines (1998) 0.00
    0.0039174017 = product of:
      0.02742181 = sum of:
        0.02742181 = weight(_text_:based in 3295) [ClassicSimilarity], result of:
          0.02742181 = score(doc=3295,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 3295, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3295)
      0.14285715 = coord(1/7)
    
    Abstract
    Reports on one area of research in an Electronic Libraries Programme (eLib) MODELS (MOving to Distributed Environments for Library Services) supporting study in 3 investigative areas: examination of current database producers' guidelines for their abstract writers; a brief survey of abstracts in some traditional online databases; and a detailed survey of abstracts from 3 types of electronic database (print sourced online databases, Internet subject trees or directories, and Internet gateways). Examination of database producers' guidelines, reported here, gave a clear view of the intentions behind professionally produced traditional (printed index based) database abstracts and provided a benchmark against which to judge the conclusions of the larger investigations into abstract style, readability and content
  12. Spiteri, L.F.: Library and information science vs business : a comparison of approaches to abstracting (1997) 0.00
    0.0039174017 = product of:
      0.02742181 = sum of:
        0.02742181 = weight(_text_:based in 3699) [ClassicSimilarity], result of:
          0.02742181 = score(doc=3699,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 3699, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3699)
      0.14285715 = coord(1/7)
    
    Abstract
    The library and information science (LIS) literature on abstracting makes little mention of abstracting conducted in the corporate/business environment, whereas the business literature suggests that abstracting is a very important component of business writing. Examines a variety of publications from LIS and business in order to compare and contrast their approaches to the following aspects of abstracting: definitions of abstracts; types of abstracts; purpose of abstracts; and writing of abstracts. Summarises the results of the examination, which revealed a number of similarities, differences, and inadequacies in the ways in which both fields approach abstracting. Concludes that both fields need to develop more detailed guidelines concerning the cognitive process of abstracting, and suggests improvements to the training of abstractors based on these findings
  13. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017) 0.00
    0.0033577727 = product of:
      0.023504408 = sum of:
        0.023504408 = weight(_text_:based in 3788) [ClassicSimilarity], result of:
          0.023504408 = score(doc=3788,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.19973516 = fieldWeight in 3788, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=3788)
      0.14285715 = coord(1/7)
    
    Abstract
    We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.
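    A query-biased snippet of the kind described here can be approximated very simply (a generic sketch over a plain-text ASR transcript, not the authors' pipeline; the function and parameter names are illustrative): slide a fixed-size window over the transcript and keep the span with the most query-term hits.

```python
def query_biased_snippet(transcript, query, window=12):
    # Slide a fixed-size window of `window` words over the (possibly noisy)
    # ASR transcript and return the span with the most query-term matches.
    words = transcript.split()
    terms = set(query.lower().split())
    best_start, best_hits = 0, -1
    for start in range(max(1, len(words) - window + 1)):
        hits = sum(1 for w in words[start:start + window] if w.lower() in terms)
        if hits > best_hits:
            best_start, best_hits = start, hits
    return " ".join(words[best_start:best_start + window])

snippet = query_biased_snippet(
    "today we talk about cooking later we cover spoken document search "
    "and how audio summaries support relevance decisions",
    "spoken document search",
)
```

    Because the selection only counts term matches, it tolerates a fair amount of ASR error, which is consistent with the paper's finding that even noisy transcripts yield useful snippets.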
  14. Palais, E.S.: Abstracting for reference librarians (1988) 0.00
    0.0030238035 = product of:
      0.021166623 = sum of:
        0.021166623 = product of:
          0.042333245 = sum of:
            0.042333245 = weight(_text_:22 in 2832) [ClassicSimilarity], result of:
              0.042333245 = score(doc=2832,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.30952093 = fieldWeight in 2832, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2832)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Reference librarian. 1988, no.22, S.297-308
  15. Wang, F.L.; Yang, C.C.: ¬The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    0.002798144 = product of:
      0.019587006 = sum of:
        0.019587006 = weight(_text_:based in 5049) [ClassicSimilarity], result of:
          0.019587006 = score(doc=5049,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.16644597 = fieldWeight in 5049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.14285715 = coord(1/7)
    
    Abstract
    Automatic text summarization systems extract the key sentences from source documents based on their salient features. This process supports users in evaluating the relevance of the documents returned by information retrieval systems, so efficient filtering can be achieved. Indirectly, these systems help to resolve the problem of information overload. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of these language differences on automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language differences on automatic text summarization. It includes the effect on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using a parallel corpus in English and Chinese as the test object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
  16. Hartley, J.; Sydes, M.: Which layout do you prefer? : an analysis of readers' preferences for different typographic layouts of structured abstracts (1996) 0.00
    0.0022678524 = product of:
      0.015874967 = sum of:
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 4411) [ClassicSimilarity], result of:
              0.031749934 = score(doc=4411,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 4411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4411)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of information science. 22(1996) no.1, S.27-37
  17. Ward, M.L.: ¬The future of the human indexer (1996) 0.00
    0.0022678524 = product of:
      0.015874967 = sum of:
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
              0.031749934 = score(doc=7244,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 7244, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7244)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    9. 2.1997 18:44:22
  18. Hartley, J.; Sydes, M.; Blurton, A.: Obtaining information accurately and quickly : are structured abstracts more efficient? (1996) 0.00
    0.0018898771 = product of:
      0.013229139 = sum of:
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 7673) [ClassicSimilarity], result of:
              0.026458278 = score(doc=7673,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 7673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7673)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of information science. 22(1996) no.5, S.349-356