Search (42 results, page 1 of 3)

  • Filter: theme_ss:"Retrievalstudien"
  1. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.06
    0.06394671 = product of:
      0.12789342 = sum of:
        0.12789342 = sum of:
          0.086370535 = weight(_text_:core in 3564) [ClassicSimilarity], result of:
            0.086370535 = score(doc=3564,freq=2.0), product of:
              0.25797358 = queryWeight, product of:
                5.0504966 = idf(docFreq=769, maxDocs=44218)
                0.051078856 = queryNorm
              0.3348038 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.0504966 = idf(docFreq=769, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
          0.04152288 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
            0.04152288 = score(doc=3564,freq=2.0), product of:
              0.17886946 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051078856 = queryNorm
              0.23214069 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
      0.5 = coord(1/2)
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text databases of clinical medical journal articles (CCML, the Comprehensive Core Medical Library from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text databases with the text of the articles behind these 204 citations revealed that two factors contributed to these failures: the searcher often constructed an overly restrictive strategy, which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused relevant documents to be missed.
    Date
    9. 1.1996 10:22:31
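
The indented figures beneath each hit are Lucene ClassicSimilarity "explain" output: for every matching query term, queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm are multiplied, the per-term products are summed, and the sum is scaled by the coordination factor coord(matching clauses / total clauses). The following minimal Python sketch reproduces the arithmetic for hit no. 1; the two-clause query structure is an assumption read off the coord(1/2) line, not something documented in the listing itself.

    # Sketch of Lucene ClassicSimilarity scoring, using only the numbers
    # shown in the explanation tree of hit no. 1 (doc 3564). The query
    # structure (two optional clauses, one of which matches) is an assumption.
    import math

    def term_score(freq, idf, query_norm, field_norm):
        """tf-idf weight of one matching term: queryWeight * fieldWeight."""
        tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
        query_weight = idf * query_norm       # 5.0504966 * 0.051078856 = 0.25797358
        field_weight = tf * idf * field_norm  # 1.4142135 * 5.0504966 * 0.046875 = 0.3348038
        return query_weight * field_weight

    QUERY_NORM = 0.051078856                  # identical for every term in the query
    FIELD_NORM = 0.046875                     # length norm of doc 3564

    w_core = term_score(2.0, 5.0504966, QUERY_NORM, FIELD_NORM)  # ~0.0863705
    w_22   = term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # ~0.0415229

    # coord(1/2): only one of two top-level query clauses matched this document.
    score = (w_core + w_22) * 0.5
    print(f"{score:.8f}")                     # ~0.06394671, the value shown for hit no. 1

The same formula accounts for the other hits; hit no. 2, for example, owes its higher "core" weight to freq = 10.0, which gives tf = sqrt(10) = 3.1622777 and a fieldWeight of 0.8734181.
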
  2. Wu, C.-J.: Experiments on using the Dublin Core to reduce the retrieval error ratio (1998) 0.06
    0.056329697 = product of:
      0.112659395 = sum of:
        0.112659395 = product of:
          0.22531879 = sum of:
            0.22531879 = weight(_text_:core in 5201) [ClassicSimilarity], result of:
              0.22531879 = score(doc=5201,freq=10.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.8734181 = fieldWeight in 5201, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5201)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    To test the effect of metadata on information retrieval, an experiment was designed and conducted with a group of 7 graduate students, using the Dublin Core as the cataloguing metadata. Results show that, on average, the retrieval error rate is only 2.9 per cent for the MES system (http://140.136.85.194), which uses the Dublin Core to describe documents on the World Wide Web, in contrast to 20.7 per cent for seven well-known search engines (HOTBOT, GAIS, LYCOS, EXCITE, INFOSEEK, YAHOO, and OCTOPUS). The very low error rate indicates that users can rely on the Dublin Core information to decide whether or not to retrieve a document.
    Object
    Dublin core
  3. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.05
    0.05328892 = product of:
      0.10657784 = sum of:
        0.10657784 = sum of:
          0.07197544 = weight(_text_:core in 1786) [ClassicSimilarity], result of:
            0.07197544 = score(doc=1786,freq=2.0), product of:
              0.25797358 = queryWeight, product of:
                5.0504966 = idf(docFreq=769, maxDocs=44218)
                0.051078856 = queryNorm
              0.27900314 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.0504966 = idf(docFreq=769, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
          0.0346024 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
            0.0346024 = score(doc=1786,freq=2.0), product of:
              0.17886946 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051078856 = queryNorm
              0.19345059 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Date
    6. 4.2015 19:31:22
  4. Barry, C.I.; Schamber, L.: User-defined relevance criteria : a comparison of 2 studies (1995) 0.03
    0.028790178 = product of:
      0.057580356 = sum of:
        0.057580356 = product of:
          0.11516071 = sum of:
            0.11516071 = weight(_text_:core in 3850) [ClassicSimilarity], result of:
              0.11516071 = score(doc=3850,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.44640505 = fieldWeight in 3850, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3850)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Aims to determine the extent to which there is a core of relevance criteria that spans such factors as information need situations, user environments, and types of information. Two recent empirical studies have identified and described user-defined relevance criteria. Synthesizes the findings of the two studies as a first step toward identifying criteria that seem to span information environments and criteria that may be more situationally specific.
  5. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.02422168 = product of:
      0.04844336 = sum of:
        0.04844336 = product of:
          0.09688672 = sum of:
            0.09688672 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09688672 = score(doc=262,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20.10.2000 12:22:23
  6. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    0.02422168 = product of:
      0.04844336 = sum of:
        0.04844336 = product of:
          0.09688672 = sum of:
            0.09688672 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.09688672 = score(doc=6418,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Online. 22(1998) no.6, S.57-58
  7. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.02422168 = product of:
      0.04844336 = sum of:
        0.04844336 = product of:
          0.09688672 = sum of:
            0.09688672 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.09688672 = score(doc=6438,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 8.2001 16:22:19
  8. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.02422168 = product of:
      0.04844336 = sum of:
        0.04844336 = product of:
          0.09688672 = sum of:
            0.09688672 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.09688672 = score(doc=5089,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:43:54
  9. Shakir, H.S.; Nagao, M.: Context-sensitive processing of semantic queries in an image database system (1996) 0.02
    0.021592634 = product of:
      0.043185268 = sum of:
        0.043185268 = product of:
          0.086370535 = sum of:
            0.086370535 = weight(_text_:core in 6626) [ClassicSimilarity], result of:
              0.086370535 = score(doc=6626,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.3348038 = fieldWeight in 6626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6626)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In an image database environment, an image can be retrieved using the common names of entities that appear in it. Shows how an image is abstracted into a hierarchy of entity names and features and how relations are established between entities visible in the image. Semantic queries are also hierarchical. At the core of query processing is a fuzzy matching technique that compares semantic queries to image abstractions by assessing the similarity of contexts between the query and the candidate image. An important objective of this matching technique is to distinguish between abstractions of different images that carry the same labels but differ in context. Each image is tagged with a matching degree even when it does not provide an exact match to the query. Experiments have been conducted to evaluate the strategy.
  10. Serrano Cobos, J.; Quintero Orta, A.: Design, development and management of an information recovery system for an Internet Website : from documentary theory to practice (2003) 0.02
    0.021592634 = product of:
      0.043185268 = sum of:
        0.043185268 = product of:
          0.086370535 = sum of:
            0.086370535 = weight(_text_:core in 2726) [ClassicSimilarity], result of:
              0.086370535 = score(doc=2726,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.3348038 = fieldWeight in 2726, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2726)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A real case study is presented, explaining along a timeline the whole process of designing, developing, and evaluating a search engine used as a navigational help tool for end users and clients on an e-commerce-driven content website. The site is a community website, a fact that determines the core design of the information service. The study covers several steps: analysis of the information recovery system, comparative analysis of other commercial search engines, service design (functionalities and scope), software selection, project design, project management, future service administration, and conclusions.
  11. Voorhees, E.M.: On test collections for adaptive information retrieval (2008) 0.02
    0.021592634 = product of:
      0.043185268 = sum of:
        0.043185268 = product of:
          0.086370535 = sum of:
            0.086370535 = weight(_text_:core in 2444) [ClassicSimilarity], result of:
              0.086370535 = score(doc=2444,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.3348038 = fieldWeight in 2444, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2444)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the "core competency" of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by controlling for (some) sources of variability, thus increasing the power of experiments that compare system effectiveness while reducing their cost. However, even within the highly-abstracted case of the Cranfield paradigm, meta-analysis demonstrates that the user/topic effect is greater than the system effect, so experiments must include a relatively large number of topics to distinguish systems' effectiveness. The evidence further suggests that changing the abstraction slightly to include just a bit more characterization of the user will result in a dramatic loss of power or increase in cost of retrieval experiments. Defining a new, feasible abstraction for supporting adaptive IR research will require winnowing the list of all possible factors that can affect retrieval behavior to a minimum number of essential factors.
  12. Tamine, L.; Chouquet, C.; Palmer, T.: Analysis of biomedical and health queries : lessons learned from TREC and CLEF evaluation benchmarks (2015) 0.02
    0.01799386 = product of:
      0.03598772 = sum of:
        0.03598772 = product of:
          0.07197544 = sum of:
            0.07197544 = weight(_text_:core in 2341) [ClassicSimilarity], result of:
              0.07197544 = score(doc=2341,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.27900314 = fieldWeight in 2341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2341)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A large body of research work has examined, from both the query side and the user-behavior side, the characteristics of medical- and health-related searches. One of the core issues in medical information retrieval (IR) is the diversity of tasks, which leads to a diversity of categories of information needs and queries. From the evaluation perspective, another related and challenging issue is the limited availability of appropriate test collections allowing the experimental validation of medically task-oriented IR techniques and systems. In this paper, we explore the peculiarities of TREC and CLEF medically oriented tasks and queries through an analysis of the differences and similarities between queries across tasks, with respect to length, specificity, and clarity features, and then study their effect on retrieval performance. We show that, even for expert-oriented queries, the level of language specificity varies significantly across tasks, as does search difficulty. Additional findings highlight that query clarity factors are task dependent and that query-term specificity based on domain-specific terminology resources is not significantly linked to term rareness in the document collection. The lessons learned from our study could serve as starting points for the design of future task-based medical information retrieval frameworks.
  13. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    0.0173012 = product of:
      0.0346024 = sum of:
        0.0346024 = product of:
          0.0692048 = sum of:
            0.0692048 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.0692048 = score(doc=3103,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:55:22
  14. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.02
    0.0173012 = product of:
      0.0346024 = sum of:
        0.0346024 = product of:
          0.0692048 = sum of:
            0.0692048 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.0692048 = score(doc=3107,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:59:22
  15. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.0173012 = product of:
      0.0346024 = sum of:
        0.0346024 = product of:
          0.0692048 = sum of:
            0.0692048 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.0692048 = score(doc=2417,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.22-25
  16. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.05536384 = score(doc=5002,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19. 3.1996 11:22:12
  17. Sanderson, M.: ¬The Reuters test collection (1996) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.05536384 = score(doc=6971,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  18. Lespinasse, K.: TREC: une conférence pour l'évaluation des systèmes de recherche d'information (1997) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.05536384 = score(doc=744,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:01:00
  19. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.05536384 = score(doc=3087,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, 20-22 Nov 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  20. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.05536384 = score(doc=3572,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Online. 22(1998) no.3, S.24-26,28