Search (3 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Literaturübersicht"
  • theme_ss:"Retrievalstudien"
  1. McGill, M.J.; Huitfeldt, J.: Experimental techniques of information retrieval (1979) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 279) [ClassicSimilarity], result of:
              0.009471525 = score(doc=279,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 279, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=279)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
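  The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: the term score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, then two coord(1/2) factors halve it twice. A minimal sketch reproducing that arithmetic, with all constants taken directly from the explain output for document 279:

  ```python
  import math

  # Constants copied from the explain output above (doc 279)
  idf = 1.153047            # idf(docFreq=37942, maxDocs=44218)
  query_norm = 0.046056706  # queryNorm
  field_norm = 0.109375     # fieldNorm(doc=279)
  freq = 2.0                # termFreq of "_text_:a" in the document

  tf = math.sqrt(freq)                      # ClassicSimilarity: tf = sqrt(freq) ≈ 1.4142135

  query_weight = idf * query_norm           # ≈ 0.053105544
  field_weight = tf * idf * field_norm      # ≈ 0.17835285
  term_score = query_weight * field_weight  # ≈ 0.009471525

  # Two nested coord(1/2) factors (one clause matched of two) halve the score twice
  final_score = term_score * 0.5 * 0.5      # ≈ 0.0023678814
  ```

  The same structure explains the other two results; only freq (hence tf) and fieldNorm differ, which is why the longer, more term-dense abstracts still end up with nearly identical scores.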
  2. Harter, S.P.; Hert, C.A.: Evaluation of information retrieval systems : approaches, issues, and methods (1997) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 2264) [ClassicSimilarity], result of:
              0.009471525 = score(doc=2264,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 2264, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2264)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    State-of-the-art review of information retrieval systems, defined as systems retrieving documents as opposed to numerical data. Explains the classic Cranfield studies that have served as a standard for retrieval testing since the 1960s and discusses the Cranfield model and its relevance-based measures of retrieval effectiveness. Details some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness and basic concepts. Discusses the evaluation of Internet search engines in light of the Cranfield model, noting the very real differences between batch systems (Cranfield) and interactive systems (Internet). Because the Internet collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Considers future directions in evaluating information retrieval systems.
    Type
    a
  3. Oppenheim, C.; Morris, A.; McKnight, C.: ¬The evaluation of WWW search engines (2000) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 4546) [ClassicSimilarity], result of:
              0.009076704 = score(doc=4546,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 4546, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way such studies have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, and therefore the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively, and variations in the performance of any given search engine over time can be tracked. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations about the types of tools needed.
    Type
    a