Search (213 results, page 2 of 11)

  • × theme_ss:"Retrievalstudien"
  • × year_i:[1990 TO 2000}
  1. Harter, S.P.: Search term combinations and retrieval overlap : a proposed methodology and case study (1990) 0.00
    0.00334869 = product of:
      0.00669738 = sum of:
        0.00669738 = product of:
          0.01339476 = sum of:
            0.01339476 = weight(_text_:a in 339) [ClassicSimilarity], result of:
              0.01339476 = score(doc=339,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.25222903 = fieldWeight in 339, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=339)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
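
The score breakdown above is Lucene's ClassicSimilarity explain output, and its numbers can be recomputed directly from the displayed factors. A minimal sketch in plain Python (not the Lucene API; the idf formula in the comment is the standard ClassicSimilarity definition, stated here as an assumption):

```python
import math

# Factors shown in the explain tree for doc 339, term "a"
freq = 4.0            # termFreq
idf = 1.153047        # approx. 1 + ln(maxDocs / (docFreq + 1))
query_norm = 0.046056706
field_norm = 0.109375

tf = math.sqrt(freq)                   # ClassicSimilarity tf = sqrt(freq) = 2.0
query_weight = idf * query_norm        # ~0.053105544 (queryWeight above)
field_weight = tf * idf * field_norm   # ~0.25222903 (fieldWeight above)
weight = query_weight * field_weight   # ~0.01339476 (weight of _text_:a)

# The two coord(1/2) factors each halve the weight
score = weight * 0.5 * 0.5             # ~0.00334869, the entry's final score
print(round(score, 8))
```
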
  2. Singhal, A.: Document length normalization (1996) 0.00
    0.00334869 = product of:
      0.00669738 = sum of:
        0.00669738 = product of:
          0.01339476 = sum of:
            0.01339476 = weight(_text_:a in 6630) [ClassicSimilarity], result of:
              0.01339476 = score(doc=6630,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.25222903 = fieldWeight in 6630, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6630)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Observes that in the Text REtrieval Conference (TREC) collection - a large experimental full text collection with varying document lengths - the likelihood of a document being judged relevant by a user increases with the document length. A retrieval strategy, such as the vector space cosine match, that retrieves documents of different lengths with roughly equal chances will not optimally retrieve useful documents from such a collection. Presents a modified technique (pivoted cosine normalization) that attempts to match the likelihood of retrieving documents of all lengths to the likelihood of their relevance, and shows that this technique yields significant improvements in retrieval effectiveness
    Type
    a
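
The pivoted normalization in entry 2's abstract has a compact core: the old length-normalization factor is tilted around a pivot so that long documents are penalized less and short ones slightly more. A hedged sketch (the slope value and the norm figures are illustrative assumptions, not taken from the paper):

```python
def pivoted_norm(old_norm: float, pivot: float, slope: float = 0.75) -> float:
    """Pivoted normalization: rotate the old normalization factor
    around the pivot. A document whose old norm equals the pivot is
    unchanged; longer documents get a smaller divisor than before
    (penalized less), shorter documents a slightly larger one."""
    return (1.0 - slope) * pivot + slope * old_norm

# Illustrative cosine norms for a short and a long document, with the
# pivot set to the collection's average old norm (assumed to be 10.0)
print(pivoted_norm(5.0, 10.0))   # 6.25: divisor grows, short doc demoted a bit
print(pivoted_norm(20.0, 10.0))  # 17.5: divisor shrinks, long doc promoted
print(pivoted_norm(10.0, 10.0))  # 10.0: pivot-length doc unchanged
```
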
  3. Wilkes, A.; Nelson, A.: Subject searching in two online catalogs : authority control vs. non authority control (1995) 0.00
    0.0031324127 = product of:
      0.0062648254 = sum of:
        0.0062648254 = product of:
          0.012529651 = sum of:
            0.012529651 = weight(_text_:a in 4450) [ClassicSimilarity], result of:
              0.012529651 = score(doc=4450,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.23593865 = fieldWeight in 4450, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4450)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Compares the results of subject searching in 2 online catalogue systems, one with authority control, the other without. Transaction logs from Library A (no authority control) were analyzed to identify the searching patterns of users; 885 searches were attempted, 351 (39,7%) by subject. 142 (40,6%) of these subject searches were unsuccessful. Identical searches were performed in a comparable library that has authority control, Library B. Terms identified in 'see' references at Library B were searched in Library A. 105 (73,9%) of the searches that appeared to fail would have retrieved at least one, and usually many, records if a link had been provided between the term chosen by the user and the term used by the system
    Type
    a
  4. Leppanen, E.: Homografiongelma tekstihaussa ja homografien disambiguoinnin vaikutukset [The homograph problem in text searching and the effects of homograph disambiguation] (1996) 0.00
    0.0030444188 = product of:
      0.0060888375 = sum of:
        0.0060888375 = product of:
          0.012177675 = sum of:
            0.012177675 = weight(_text_:a in 27) [ClassicSimilarity], result of:
              0.012177675 = score(doc=27,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22931081 = fieldWeight in 27, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=27)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Homonymy is known to often cause false drops in free text searching of a full text database. The problem is quite common and difficult to avoid in Finnish, but nobody has examined it before. Reports on a study that examined the frequency of, and solutions to, the homonymy problem, based on searches made in a Finnish full text database containing about 55.000 newspaper articles. The results indicate that homonymy is not a very serious problem in full text searching, with only about 1 search result set out of 4 containing false drops caused by homonymy. Several other reasons for nonrelevance were much more common. However, some result sets contained a considerable number of homonymy errors, so their incidence seems to be quite random. A study was also made into whether homonyms can be disambiguated by syntactic analysis, with the result that 75,2% of homonyms were disambiguated by this method. Verb homonyms were considerably easier to disambiguate than nouns. Although homonymy is not a very big problem, it could perhaps easily be eliminated if there were a suitable syntactic analyzer in the IR system
    Type
    a
  5. Bates, M.J.: Document familiarity, relevance, and Bradford's law : the Getty Online Searching Project report; no.5 (1996) 0.00
    0.0030255679 = product of:
      0.0060511357 = sum of:
        0.0060511357 = product of:
          0.012102271 = sum of:
            0.012102271 = weight(_text_:a in 6978) [ClassicSimilarity], result of:
              0.012102271 = score(doc=6978,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22789092 = fieldWeight in 6978, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6978)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Getty Online Searching Project studied the end user searching behaviour of 27 humanities scholars over a 2 year period. A number of scholars anticipated that they would already be familiar with a percentage of the records their searches retrieved. High document familiarity can be a significant factor in searching. Draws implications regarding the impact of high document familiarity on relevance and information retrieval theory, and speculates on the relationship between high document familiarity and Bradford's law
    Type
    a
  6. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: ¬The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.00
    0.0029294936 = product of:
      0.005858987 = sum of:
        0.005858987 = product of:
          0.011717974 = sum of:
            0.011717974 = weight(_text_:a in 7522) [ClassicSimilarity], result of:
              0.011717974 = score(doc=7522,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22065444 = fieldWeight in 7522, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7522)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  7. Large, A.; Beheshti, J.; Breuleux, A.: ¬A comparison of information retrieval from print and CD-ROM versions of an encyclopedia by elementary school students (1994) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 1015) [ClassicSimilarity], result of:
              0.011600202 = score(doc=1015,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 1015, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1015)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes an experiment using 48 sixth-grade students to compare retrieval techniques using the print and CD-ROM versions of Compton's Encyclopedia. Four queries of different complexity (measured by the number of terms present) were searched by the students after a short training session. The searches were timed and the retrieval steps and search terms were noted. The searches were no faster on the CD-ROM than on the print version, but in both cases time was related directly to the number of terms involved. The students coped well with the CD-ROM interface and its several retrieval paths
    Editor
    Renaud, A.
    Type
    a
  8. Beaulieu, M.; Robertson, S.; Rasmussen, E.: Evaluating interactive systems in TREC (1996) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 2998) [ClassicSimilarity], result of:
              0.011600202 = score(doc=2998,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 2998, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2998)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The TREC experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an 'interactive track' within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow-up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance
    Type
    a
  9. Hull, D.A.: Stemming algorithms : a case study for detailed evaluation (1996) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 2999) [ClassicSimilarity], result of:
              0.011600202 = score(doc=2999,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 2999, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2999)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The majority of information retrieval experiments are evaluated by measures such as average precision and average recall. Fundamental decisions about the superiority of one retrieval technique over another are made solely on the basis of these measures. We claim that average performance figures need to be validated with a careful statistical analysis and that there is a great deal of additional information that can be uncovered by looking closely at the results of individual queries. This article is a case study of stemming algorithms which describes a number of novel approaches to evaluation and demonstrates their value
    Type
    a
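
Entry 9's claim that averages need per-query validation is easy to demonstrate: a mean improvement can come from a few queries while the rest are unchanged or hurt. A minimal sketch using a two-sided sign test over per-query average precision (the score lists are invented for illustration, not Hull's data):

```python
from math import comb

# Hypothetical per-query average precision for two stemming runs
run_a = [0.42, 0.31, 0.55, 0.20, 0.48, 0.37, 0.29, 0.51, 0.44, 0.33]
run_b = [0.40, 0.35, 0.52, 0.28, 0.47, 0.41, 0.36, 0.50, 0.49, 0.39]

diffs = [b - a for a, b in zip(run_a, run_b)]
mean_gain = sum(diffs) / len(diffs)   # run_b looks better on average
wins = sum(d > 0 for d in diffs)
losses = sum(d < 0 for d in diffs)
n = wins + losses

# Two-sided sign test: probability of a win/loss split at least this
# lopsided under the null hypothesis that each query is a coin flip
k = max(wins, losses)
p_value = min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n)
print(f"mean gain {mean_gain:+.3f}, {wins} wins / {losses} losses, p = {p_value:.3f}")
```

Here the average improves by 0.027, yet with only 6 wins to 4 losses the sign test is nowhere near significant (p = 0.754): exactly the trap the abstract warns about.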
  10. Spink, A.; Goodrum, A.: ¬A study of search intermediary working notes : implications for IR system design (1996) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 6981) [ClassicSimilarity], result of:
              0.011600202 = score(doc=6981,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 6981, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6981)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports findings from an exploratory study investigating working notes created during encoding and external storage (EES) processes by human search intermediaries using a Boolean information retrieval system. Analysis of 221 sets of working notes created by human search intermediaries revealed extensive use of EES processes and the creation of working notes of textual, numerical and graphical entities. Nearly 70% of recorded working notes were textual/numerical entities, nearly 30% were graphical entities and 0,73% were indiscernible. Segmentation devices were also used in 48% of the working notes. The creation of working notes during the EES processes was a fundamental element within the mediated, interactive information retrieval process. Discusses implications for the design of interfaces to support users' EES processes and further research
    Type
    a
  11. Spink, A.; Greisdorf, H.: Users' partial relevance judgements during online searching (1997) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 623) [ClassicSimilarity], result of:
              0.011600202 = score(doc=623,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 623, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=623)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports results of research to examine users conducting their initial online search on a particular information problem. Findings from 3 separate studies of relevance judgements by 44 initial search users were examined, including 2 studies of 13 end users and a study of 18 users engaged in mediated online searches. Retrieved items were judged on the scale 'relevant', 'partially relevant' and 'not relevant'. Results suggest that: a relationship exists between partially relevant items retrieved and changes in the users' information problem or question during an information seeking process; partial relevance judgements play an important role for users in the early stages of seeking information on a particular information problem; and 'highly' relevant items may or may not be the only items useful at the early stages of users' information seeking processes
    Type
    a
  12. Davis, M.; Dunning, T.: ¬A TREC evaluation of query translation methods for multi-lingual text retrieval (1996) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 1917) [ClassicSimilarity], result of:
              0.011481222 = score(doc=1917,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 1917, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1917)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  13. Aldous, K.J.: ¬A system for the automatic retrieval of information from a specialist database (1996) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 4078) [ClassicSimilarity], result of:
              0.011481222 = score(doc=4078,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 4078, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4078)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Accessing useful information from a complex database requires knowledge of the structure of the database and an understanding of the methods of information retrieval. A means of overcoming this knowledge barrier to the use of narrow domain databases is proposed in which the user is required to enter only a series of terms which identify the required material. Describes a method which classifies terms according to their meaning in the context of the database and which uses this classification to access and execute modules of code stored in the database to effect retrieval. Presents an implementation of the method using a database of technical information on the nature and use of fungicides. Initial results of trials with potential users indicate that the system can produce relevant responses to queries expressed in this style. Since the code modules are part of the database, extensions may be easily implemented to handle most queries which users are likely to pose
    Type
    a
  14. Guglielmo, E.J.; Rowe, N.C.: Natural-language retrieval of images based on descriptive captions (1996) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 6624) [ClassicSimilarity], result of:
              0.011481222 = score(doc=6624,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 6624, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6624)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes a prototype intelligent information retrieval system that uses natural-language understanding to efficiently locate captioned data. Multimedia data generally requires captions to explain its features and significance. Such descriptive captions often rely on long nominal compounds (strings of consecutive nouns) which create problems of ambiguous word sense. Presents a system in which captions and user queries are parsed and interpreted to produce a logical form, using a detailed theory of the meaning of nominal compounds. A fine-grain match can then compare the logical form of the query to the logical forms for each caption. To improve system efficiency, the system performs a coarse-grain match with index files, using nouns and verbs extracted from the query. Experiments with randomly selected queries and captions from an existing image library show an increase of 30% in precision and 50% in recall over the keyphrase approach currently used. Processing times have a median of 7 seconds as compared to 8 minutes for the existing system
    Type
    a
  15. Buckley, C.; Singhal, A.; Mitra, M.; Salton, G.: New retrieval approaches using SMART : TREC 4 (1996) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 7528) [ClassicSimilarity], result of:
              0.011481222 = score(doc=7528,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 7528, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7528)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  16. Dimitroff, A.; Wolfram, D.; Volz, A.: Affective response and retrieval performance : analysis of contributing factors (1996) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 164) [ClassicSimilarity], result of:
              0.011481222 = score(doc=164,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 164, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes a study which investigated the affective response of 83 subjects to 2 versions of a hypertext-based bibliographic retrieval system. The objective of the study was to determine if subjects preferred searching a hypertext information retrieval (IR) system via traditional bibliographic links or via an enhanced set of linkages between structured records. The study also examined the utility of using factor analysis to explore subjects' affective responses to searching the 2 hypertext-based IR systems; explored the effect of experience on search outcome; and compared the effect of different types of linkages within the hypertext system. Findings reveal a complex relationship between system and user that is sometimes contradictory. Searchers found the systems to be usable or unusable in different ways, indicating that further research is needed to isolate the specific features that searchers find frustrating or not when searching structured records via a hypertext-based IR system
    Type
    a
  17. Singhal, A.; Buckley, C.; Mitra, M.: Using query zoning and correlation with SMART : TREC 5 (1997) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 3090) [ClassicSimilarity], result of:
              0.011481222 = score(doc=3090,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 3090, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3090)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  18. Savoy, J.; Calvé, A. le; Vrajitoru, D.: Report on the TREC5 experiment : data fusion and collection fusion (1997) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 3108) [ClassicSimilarity], result of:
              0.011481222 = score(doc=3108,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 3108, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3108)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  19. Sheridan, P.; Ballerini, J.P.; Schäuble, P.: Building a large multilingual test collection from comparable news documents (1998) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 6298) [ClassicSimilarity], result of:
              0.011481222 = score(doc=6298,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 6298, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6298)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  20. Agata, T.: ¬A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 3892) [ClassicSimilarity], result of:
              0.011481222 = score(doc=3892,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 3892, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3892)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
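
The ESL measure named in entry 20's title (Cooper's Expected Search Length) admits a compact illustration. For a strictly ordered result list, the search length for a need of n relevant documents is the number of non-relevant documents the user must examine before reaching the n-th relevant one. A hedged sketch (this handles only a strict ranking, not the tie-handling of the full "weak ordering" definition; the example ranking is invented):

```python
def search_length(ranking, n_needed):
    """Number of non-relevant documents examined before the n-th
    relevant document in a strictly ordered ranking.
    ranking: list of booleans, True = relevant."""
    found = 0
    nonrel = 0
    for rel in ranking:
        if rel:
            found += 1
            if found == n_needed:
                return nonrel
        else:
            nonrel += 1
    raise ValueError("ranking holds fewer than n_needed relevant docs")

# A hypothetical engine's result list: R = relevant, N = non-relevant
ranking = [c == "R" for c in "NRRNNRN"]
print(search_length(ranking, 1))  # 1 non-relevant examined before the 1st R
print(search_length(ranking, 3))  # 3 non-relevant examined before the 3rd R
```

Averaging such lengths over a set of test queries gives a user-effort view of an engine's quality that, unlike precision at a fixed cutoff, adapts to how much the user actually wants.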

Types

  • a 205
  • r 3
  • s 3
  • m 2
  • el 1