Search (47 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Byrne, J.R.: Relative effectiveness of titles, abstracts, and subject headings for machine retrieval from the COMPENDEX services (1975) 0.06
    0.06382137 = product of:
      0.12764274 = sum of:
        0.12764274 = product of:
          0.25528547 = sum of:
            0.25528547 = weight(_text_:abstracts in 1604) [ClassicSimilarity], result of:
              0.25528547 = score(doc=1604,freq=8.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.8832879 = fieldWeight in 1604, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1604)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We have investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The COMPENDEX data base was used for this study since it combined all of these data elements of interest. In general, the results obtained from the experiments indicate that, as expected, titles alone are not satisfactory for efficient retrieval. The combination of titles and abstracts came the closest to 100% retrieval, with searching of abstracts alone doing almost as well. Indexer input, although necessary for 100% retrieval in almost all cases, was found to be relatively unimportant.
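    The score breakdown shown for this record is standard Lucene ClassicSimilarity output: the field weight is tf x idf x fieldNorm, the query weight is idf x queryNorm, and the two coord(1/2) factors scale the product. A minimal Python sketch, using only the figures displayed in the explanation above, reproduces the 0.06382137 value:

      import math

      # Values taken directly from the score explanation above
      freq = 8.0                      # occurrences of "abstracts" in the record
      idf = 5.7104354                 # idf(docFreq=397, maxDocs=44218)
      query_norm = 0.05061213
      field_norm = 0.0546875

      tf = math.sqrt(freq)                        # 2.828427
      query_weight = idf * query_norm             # 0.2890173
      field_weight = tf * idf * field_norm        # 0.8832879
      term_score = query_weight * field_weight    # 0.25528547

      # the two coord(1/2) factors halve the score twice
      final_score = term_score * 0.5 * 0.5        # 0.06382137
      assert math.isclose(final_score, 0.06382137, rel_tol=1e-5)
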
  2. Feng, S.: A comparative study of indexing languages in single and multidatabase searching (1989) 0.05
    0.051575456 = product of:
      0.10315091 = sum of:
        0.10315091 = product of:
          0.20630182 = sum of:
            0.20630182 = weight(_text_:abstracts in 2494) [ClassicSimilarity], result of:
              0.20630182 = score(doc=2494,freq=4.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.7138044 = fieldWeight in 2494, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An experiment was conducted using 3 data bases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting on-line searching: effectiveness of search vocabularies, combinations of fields searched, and overlaps among databases. Natural language, controlled vocabulary and a mixture of natural language and controlled terms were tested using different fields of bibliographic records. Also discusses a comparative evaluation of single and multi-data base searching, measuring the overlap among data bases and their influence upon on-line searching.
  3. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.04
    0.036469355 = product of:
      0.07293871 = sum of:
        0.07293871 = product of:
          0.14587742 = sum of:
            0.14587742 = weight(_text_:abstracts in 5689) [ClassicSimilarity], result of:
              0.14587742 = score(doc=5689,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.50473595 = fieldWeight in 5689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5689)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a data base of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches.
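    Of the three expansion methods, the Soundex approach is the simplest to illustrate: a query term is expanded to every index term that shares its four-character Soundex code. A minimal Python sketch (not the authors' implementation; the vocabulary is invented for illustration):

      def soundex(word: str) -> str:
          """Standard four-character Soundex code (first letter + three digits)."""
          def code(ch):
              for digit, letters in (("1", "BFPV"), ("2", "CGJKQSXZ"), ("3", "DT"),
                                     ("4", "L"), ("5", "MN"), ("6", "R")):
                  if ch in letters:
                      return digit
              return ""
          word = word.upper()
          out, prev = word[0], code(word[0])
          for ch in word[1:]:
              digit = code(ch)
              if digit and digit != prev:
                  out += digit
              if ch not in "HW":          # H and W do not break a run of equal codes
                  prev = digit
          return (out + "000")[:4]

      def expand(term, vocabulary):
          """Return all vocabulary terms that share the query term's Soundex code."""
          target = soundex(term)
          return [t for t in vocabulary if soundex(t) == target]

      vocabulary = ["catalogue", "cataloging", "classification", "katalog"]
      print(expand("catalog", vocabulary))   # ['catalogue', 'cataloging']
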
  4. Armstrong, C.J.; Medawar, K.: Investigation into the quality of databases in general use in the UK (1996) 0.03
    0.031910684 = product of:
      0.06382137 = sum of:
        0.06382137 = product of:
          0.12764274 = sum of:
            0.12764274 = weight(_text_:abstracts in 6768) [ClassicSimilarity], result of:
              0.12764274 = score(doc=6768,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.44164395 = fieldWeight in 6768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6768)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports on a Centre for Information Quality Management (CIQM) BLRRD funded project which investigated the quality of databases in general use in the UK. Gives a literature review of quality in library and information services. Reports the results of a CIQM questionnaire survey on the quality problems of databases and their effect on users. Carries out database evaluations of: INSPEC on ESA-IRS, INSPEC on KR Data-Star, INSPEC on UMI CD-ROM, BNB on CD-ROM, and Information Science Abstracts Plus CD-ROM. Sets out a methodology for the evaluation of bibliographic databases.
  5. Hersh, W.; Pentecost, J.; Hickam, D.: A task-oriented approach to information retrieval evaluation : overview and design for empirical testing (1996) 0.03
    0.027352015 = product of:
      0.05470403 = sum of:
        0.05470403 = product of:
          0.10940806 = sum of:
            0.10940806 = weight(_text_:abstracts in 3001) [ClassicSimilarity], result of:
              0.10940806 = score(doc=3001,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.37855196 = fieldWeight in 3001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3001)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As retrieval systems become more oriented towards end-users, there is an increasing need for improved methods to evaluate their effectiveness. We performed a task-oriented assessment of 2 MEDLINE searching systems, one which promotes traditional Boolean searching on human-indexed thesaurus terms and the other natural language searching on words in the titles, abstracts, and indexing terms. Medical students were randomized to one of the 2 systems and given clinical questions to answer. The students were able to use each system successfully, with no significant differences in questions correctly answered, time taken, relevant articles retrieved, or user satisfaction between the systems. This approach to evaluation was successful in measuring the effectiveness of system use and demonstrates that both types of systems can be used equally well with minimal training.
  6. Sen, B.K.: An inquiry into the information retrieval efficiency of LISA PLUS database (1996) 0.03
    0.027352015 = product of:
      0.05470403 = sum of:
        0.05470403 = product of:
          0.10940806 = sum of:
            0.10940806 = weight(_text_:abstracts in 6640) [ClassicSimilarity], result of:
              0.10940806 = score(doc=6640,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.37855196 = fieldWeight in 6640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6640)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports results of a study to compare the efficiency of the computerized searching of LISA Plus and Current Research in Library and Information Science (CRLIS) with manual searching of the printed version of LISA. The study focused on articles covering the library and information science (LIS) profession, published in Asian library and information science periodicals. The first stage was to identify Asian LIS periodicals using the Ulrich's Plus CD-ROM database. Computerized searching involved 2 methods: straightforward creation of sets for every periodical title, and browsing of brief citations or abstracts of all articles identified as being on the library profession published in the 1993 LISA. The manual searching involved browsing section 2.0 (Profession) for all 11 issues of the printed LISA. Examines the reasons why computerized searches took more time and retrieved fewer items. Suggests measures whereby the efficiency of computerized searches can be increased and concludes that, to ensure comprehensive recall of relevant items, a combination of manual and computerized searching is indispensable.
  7. Keyes, J.G.: Using conceptual categories of questions to measure differences in retrieval performance (1996) 0.03
    0.027352015 = product of:
      0.05470403 = sum of:
        0.05470403 = product of:
          0.10940806 = sum of:
            0.10940806 = weight(_text_:abstracts in 7440) [ClassicSimilarity], result of:
              0.10940806 = score(doc=7440,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.37855196 = fieldWeight in 7440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7440)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The form of a question denotes the relationship between the current state of knowledge of the questioner and the propositional content of the question. To assess whether these semantic differences have implications for information retrieval, uses the CF database, a 1,239-document test database containing titles and abstracts of documents pertaining to cystic fibrosis. The database has an accompanying list of 100 questions, which were divided into 5 conceptual categories based on their semantic representation. 2 retrieval methods were used to investigate potential differences in outcomes across conceptual categories: the cosine measurement and the similarity measurement. The ranked results produced by different algorithms will vary for individual conceptual categories as well as for overall performance.
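    The 'cosine measurement' referred to here is the usual angle-based comparison of a query vector and a document vector. A minimal Python sketch over plain term-frequency vectors (illustrative only; the study's actual term weighting is not described in the abstract, and the example documents are invented):

      import math
      from collections import Counter

      def cosine(query_terms, doc_terms):
          """Cosine of the angle between two term-frequency vectors."""
          q, d = Counter(query_terms), Counter(doc_terms)
          dot = sum(q[t] * d[t] for t in q)
          qn = math.sqrt(sum(v * v for v in q.values()))
          dn = math.sqrt(sum(v * v for v in d.values()))
          return dot / (qn * dn) if qn and dn else 0.0

      question = "treatment of cystic fibrosis".split()
      docs = {
          "d1": "cystic fibrosis treatment outcomes in children".split(),
          "d2": "library catalogue searching behaviour".split(),
      }
      ranking = sorted(docs, key=lambda name: cosine(question, docs[name]), reverse=True)
      print(ranking)   # ['d1', 'd2'] - the cystic fibrosis document ranks first
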
  8. Frei, H.P.; Schäuble, P.: Determining the effectiveness of retrieval algorithms (1991) 0.03
    0.027352015 = product of:
      0.05470403 = sum of:
        0.05470403 = product of:
          0.10940806 = sum of:
            0.10940806 = weight(_text_:abstracts in 787) [ClassicSimilarity], result of:
              0.10940806 = score(doc=787,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.37855196 = fieldWeight in 787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=787)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A new effectiveness measure ('usefulness measure') is proposed to circumvent the problems associated with the classical recall and precision measures. It is difficult to evaluate systems that filter extremely dynamic information; the determination of all relevant documents in a real-life collection is hardly affordable, and the specification of binary relevance assessments is often problematic. The new measure relies on a statistical approach with which two retrieval algorithms are compared. In contrast to the classical recall and precision measures, the new measure requires only relative judgments, and the reply of the retrieval system is compared directly with the information need of the user rather than with the query. The new measure has the added ability to determine an error probability that indicates how stable the usefulness measure is. Using a test collection of abstracts from CACM, it is shown that our new measure is also capable of disclosing the effect of manually assigned descriptors and yields results similar to those of the traditional recall and precision measures.
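    The usefulness measure itself is not spelled out in this abstract, but its core idea, comparing two retrieval algorithms from relative judgments alone, can be illustrated with an ordinary sign test over per-query user preferences. This is a stand-in illustration, not the measure proposed in the paper:

      from math import comb

      def sign_test(preferences):
          """Two-sided sign test over per-query preferences.

          `preferences` holds +1 where algorithm A was judged more useful than B,
          -1 where B was preferred, and 0 for ties (ties are discarded).
          Returns the p-value for the hypothesis that neither system is better.
          """
          wins_a = sum(1 for p in preferences if p > 0)
          wins_b = sum(1 for p in preferences if p < 0)
          n, k = wins_a + wins_b, max(wins_a, wins_b)
          p_one_sided = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
          return min(1.0, 2 * p_one_sided)

      # 9 queries preferred A, 2 preferred B, 1 tie -> p is roughly 0.065
      print(sign_test([+1] * 9 + [-1] * 2 + [0]))
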
  9. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.03
    0.027352015 = product of:
      0.05470403 = sum of:
        0.05470403 = product of:
          0.10940806 = sum of:
            0.10940806 = weight(_text_:abstracts in 5202) [ClassicSimilarity], result of:
              0.10940806 = score(doc=5202,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.37855196 = fieldWeight in 5202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5202)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded on object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
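    Co-occurrence analysis of this kind counts how often pairs of indexing terms appear in the same abstract and converts the counts into an association strength. A minimal Python sketch using the Dice coefficient (illustrative only; the paper's filtering and weighting are not reproduced, and the example documents are invented):

      from collections import Counter
      from itertools import combinations

      def associations(documents, min_count=1):
          """Rank term pairs by a simple Dice co-occurrence score."""
          term_freq, pair_freq = Counter(), Counter()
          for terms in documents:
              unique = set(terms)
              term_freq.update(unique)
              pair_freq.update(frozenset(p) for p in combinations(sorted(unique), 2))
          scores = {}
          for pair, n_ab in pair_freq.items():
              if n_ab >= min_count:
                  a, b = sorted(pair)
                  # Dice coefficient: 2*n(a,b) / (n(a) + n(b))
                  scores[(a, b)] = 2 * n_ab / (term_freq[a] + term_freq[b])
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      docs = [
          ["parallel", "computing", "indexing"],
          ["parallel", "computing", "thesaurus"],
          ["indexing", "thesaurus"],
      ]
      print(associations(docs)[:3])   # the computing/parallel pair scores highest (Dice = 1.0)
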
  10. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.024000356 = product of:
      0.048000712 = sum of:
        0.048000712 = product of:
          0.096001424 = sum of:
            0.096001424 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.096001424 = score(doc=262,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20.10.2000 12:22:23
  11. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    0.024000356 = product of:
      0.048000712 = sum of:
        0.048000712 = product of:
          0.096001424 = sum of:
            0.096001424 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.096001424 = score(doc=6418,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Online. 22(1998) no.6, S.57-58
  12. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.024000356 = product of:
      0.048000712 = sum of:
        0.048000712 = product of:
          0.096001424 = sum of:
            0.096001424 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.096001424 = score(doc=6438,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 8.2001 16:22:19
  13. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.024000356 = product of:
      0.048000712 = sum of:
        0.048000712 = product of:
          0.096001424 = sum of:
            0.096001424 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.096001424 = score(doc=5089,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:43:54
  14. Pirkola, A.; Jarvelin, K.: The effect of anaphor and ellipsis resolution on proximity searching in a text database (1995) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 4088) [ClassicSimilarity], result of:
              0.09117339 = score(doc=4088,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 4088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4088)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    So far, methods for ellipsis and anaphor resolution have been developed and the effects of anaphor resolution have been analyzed in the context of statistical information retrieval of scientific abstracts. No significant improvements have been observed. Analyzes the effects of ellipsis and anaphor resolution on proximity searching in a full text database. Anaphora and ellipsis are classified on the basis of the type of their correlates / antecedents rather than, as is traditional, on the basis of their own linguistic type. The classification differentiates proper names and common nouns of basic words, compound words, and phrases. The study was carried out in a newspaper article database containing 55,000 full text articles. A set of 154 keyword pairs in different categories was created. Human resolution of keyword ellipsis and anaphora was performed to identify sentences and paragraphs which would match proximity searches after resolution. Findings indicate that ellipsis and anaphor resolution is most relevant for proper name phrases and only marginal in the other keyword categories. Therefore the recall effect of restricted resolution of proper name phrases only was analyzed for keyword pairs containing at least 1 proper name phrase. Findings indicate a recall increase of 38.2% in sentence searches, and 28.8% in paragraph searches, when proper name ellipses were resolved. The recall increase was 17.6% in sentence searches, and 19.8% in paragraph searches, when proper name anaphora were resolved. Some simple and computationally justifiable resolution method might be developed only for proper name phrases to support keyword-based full text information retrieval. Discusses elements of such a method.
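    Sentence-level proximity searching, as measured here, asks whether both members of a keyword pair occur within the same sentence; ellipsis or anaphor resolution would rewrite the text before this test is applied. A minimal Python sketch with an invented example article (the newspaper database and the resolution step are not reproduced):

      import re

      def same_sentence(text, word_a, word_b):
          """True if word_a and word_b co-occur in at least one sentence of text."""
          for sentence in re.split(r"(?<=[.!?])\s+", text):
              words = set(re.findall(r"\w+", sentence.lower()))
              if word_a.lower() in words and word_b.lower() in words:
                  return True
          return False

      article = ("The city council approved the annual budget on Monday. "
                 "Opposition members criticised the cuts.")
      print(same_sentence(article, "council", "budget"))  # True
      print(same_sentence(article, "budget", "cuts"))     # False: "the cuts" is elliptical
                                                          # for "the budget cuts", so the pair
                                                          # only matches after resolution
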
  15. Robertson, S.E.; Sparck Jones, K.: Simple, proven approaches to text retrieval (1997) 0.02
    0.022793347 = product of:
      0.045586694 = sum of:
        0.045586694 = product of:
          0.09117339 = sum of:
            0.09117339 = weight(_text_:abstracts in 4532) [ClassicSimilarity], result of:
              0.09117339 = score(doc=4532,freq=2.0), product of:
                0.2890173 = queryWeight, product of:
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.05061213 = queryNorm
                0.31545997 = fieldWeight in 4532, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7104354 = idf(docFreq=397, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4532)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are useful for many different types of text material, are viable for very large files, and have the advantage that they do not require special skills or training for searching, but are easy for end users. The document and text retrieval methods described here have a sound theoretical basis, are well established by extensive testing, and the ideas involved are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods presented here work very well with full texts, not only titles and abstracts, and with large files of texts containing three quarters of a million documents. These tests, the TREC Tests (see Harman 1993 - 1997; IP&M 1995), have been rigorous comparative evaluations involving many different approaches to information retrieval. These techniques depend on the use of simple terms for indexing both request and document texts; on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching. The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts. The user's request can be a word list, phrases, sentences or extended text.
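    The ingredients listed in this note (simple indexing terms, idf-style term weighting, score accumulation over an inverted file, ranked output) fit in a few lines. A minimal, illustrative Python sketch, not the particular weighting scheme evaluated at TREC, and with invented example documents:

      import math
      from collections import defaultdict

      def build_index(docs):
          """Inverted file: term -> {doc_id: term frequency}."""
          index = defaultdict(dict)
          for doc_id, text in docs.items():
              for term in text.lower().split():
                  index[term][doc_id] = index[term].get(doc_id, 0) + 1
          return index

      def search(query, index, n_docs):
          """Rank documents by summed tf * idf weights of the query terms."""
          scores = defaultdict(float)
          for term in query.lower().split():
              postings = index.get(term, {})
              if not postings:
                  continue
              idf = math.log(n_docs / len(postings))
              for doc_id, tf in postings.items():
                  scores[doc_id] += tf * idf
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      docs = {
          "d1": "simple term weighting for ranked retrieval",
          "d2": "relevance feedback modifies request term weights",
          "d3": "inverted file organisation with a term list",
      }
      print(search("term weighting", build_index(docs), len(docs)))   # d1 ranks first
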
  16. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    0.017143112 = product of:
      0.034286223 = sum of:
        0.034286223 = product of:
          0.06857245 = sum of:
            0.06857245 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.06857245 = score(doc=3103,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:55:22
  17. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.02
    0.017143112 = product of:
      0.034286223 = sum of:
        0.034286223 = product of:
          0.06857245 = sum of:
            0.06857245 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06857245 = score(doc=3107,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:59:22
  18. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.017143112 = product of:
      0.034286223 = sum of:
        0.034286223 = product of:
          0.06857245 = sum of:
            0.06857245 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.06857245 = score(doc=2417,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.22-25
  19. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.01
    0.0137144895 = product of:
      0.027428979 = sum of:
        0.027428979 = product of:
          0.054857958 = sum of:
            0.054857958 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.054857958 = score(doc=5002,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19. 3.1996 11:22:12
  20. Sanderson, M.: The Reuters test collection (1996) 0.01
    0.0137144895 = product of:
      0.027428979 = sum of:
        0.027428979 = product of:
          0.054857958 = sum of:
            0.054857958 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.054857958 = score(doc=6971,freq=2.0), product of:
                0.17723505 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05061213 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon

Languages

  • e 42
  • d 3
  • f 1

Types

  • a 41
  • s 3
  • m 2
  • el 1
  • r 1