Search (372 results, page 1 of 19)

  • theme_ss:"Retrievalstudien"
  1. Ellis, D.: Progress and problems in information retrieval (1996) 0.09
    0.08800239 = product of:
      0.17600478 = sum of:
        0.17600478 = sum of:
          0.11942034 = weight(_text_:retrieval in 789) [ClassicSimilarity], result of:
            0.11942034 = score(doc=789,freq=16.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.75622874 = fieldWeight in 789, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=789)
          0.056584436 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
            0.056584436 = score(doc=789,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=789)
      0.5 = coord(1/2)
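    The expanded tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, assuming the stock ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), per-term score = queryWeight x fieldWeight), the constants from the tree reproduce the 0.08800239 shown for entry 1:

    ```python
    import math

    def idf(doc_freq, max_docs):
        # Stock ClassicSimilarity idf; e.g. idf(5836, 44218) = 3.024915 as in the tree
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                      # e.g. 4.0 = tf(freq=16.0)
        idf_val = idf(doc_freq, max_docs)
        query_weight = idf_val * query_norm       # e.g. 0.15791564 = queryWeight
        field_weight = tf * idf_val * field_norm  # e.g. 0.75622874 = fieldWeight
        return query_weight * field_weight

    # Entry 1: terms "retrieval" (freq=16) and "22" (freq=2), fieldNorm=0.0625
    s = term_score(16.0, 5836, 44218, 0.052204985, 0.0625) \
      + term_score(2.0, 3622, 44218, 0.052204985, 0.0625)
    score = 0.5 * s  # 0.5 = coord(1/2) from the tree; close to 0.08800239
    ```

    The same function with each entry's freq and fieldNorm values reproduces the other score trees on this page.
    
    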
    
    Abstract
    An introduction to the principal generic approaches to information retrieval research with their associated concepts, models and systems, this text is designed to keep the information professional up to date with the major themes and developments that have preoccupied researchers in recent months in relation to textual and documentary retrieval systems.
    COMPASS
    Information retrieval
    Content
    First published 1991 as New horizons in information retrieval
    Date
    26. 7.2002 20:22:46
    LCSH
    Information retrieval
    Subject
    Information retrieval
    Information retrieval
  2. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.09
    0.08645517 = product of:
      0.17291033 = sum of:
        0.17291033 = sum of:
          0.07388757 = weight(_text_:retrieval in 6418) [ClassicSimilarity], result of:
            0.07388757 = score(doc=6418,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.46789268 = fieldWeight in 6418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.109375 = fieldNorm(doc=6418)
          0.09902276 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
            0.09902276 = score(doc=6418,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.5416616 = fieldWeight in 6418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6418)
      0.5 = coord(1/2)
    
    Source
    Online. 22(1998) no.6, S.57-58
  3. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.09
    0.08645517 = product of:
      0.17291033 = sum of:
        0.17291033 = sum of:
          0.07388757 = weight(_text_:retrieval in 6438) [ClassicSimilarity], result of:
            0.07388757 = score(doc=6438,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.46789268 = fieldWeight in 6438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.109375 = fieldNorm(doc=6438)
          0.09902276 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
            0.09902276 = score(doc=6438,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.5416616 = fieldWeight in 6438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6438)
      0.5 = coord(1/2)
    
    Date
    11. 8.2001 16:22:19
  4. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.09
    0.08645517 = product of:
      0.17291033 = sum of:
        0.17291033 = sum of:
          0.07388757 = weight(_text_:retrieval in 5089) [ClassicSimilarity], result of:
            0.07388757 = score(doc=5089,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.46789268 = fieldWeight in 5089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.109375 = fieldNorm(doc=5089)
          0.09902276 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
            0.09902276 = score(doc=5089,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.5416616 = fieldWeight in 5089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=5089)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:43:54
  5. Sanderson, M.: ¬The Reuters test collection (1996) 0.07
    0.07051369 = product of:
      0.14102738 = sum of:
        0.14102738 = sum of:
          0.084442936 = weight(_text_:retrieval in 6971) [ClassicSimilarity], result of:
            0.084442936 = score(doc=6971,freq=8.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.5347345 = fieldWeight in 6971, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=6971)
          0.056584436 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
            0.056584436 = score(doc=6971,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 6971, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6971)
      0.5 = coord(1/2)
    
    Abstract
    Describes the Reuters test collection, which at 22,173 references is significantly larger than most traditional test collections. In addition, Reuters has none of the recall calculation problems normally associated with some of the larger test collections available. Explains the method derived by D.D. Lewis to perform retrieval experiments on the Reuters collection and illustrates the use of the Reuters collection with some simple retrieval experiments that compare the performance of stemming algorithms
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  6. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.07
    0.066060096 = product of:
      0.13212019 = sum of:
        0.13212019 = sum of:
          0.08260882 = weight(_text_:retrieval in 3368) [ClassicSimilarity], result of:
            0.08260882 = score(doc=3368,freq=10.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.5231199 = fieldWeight in 3368, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
          0.04951138 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
            0.04951138 = score(doc=3368,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.2708308 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
      0.5 = coord(1/2)
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems
    Date
    22. 2.1996 13:14:10
  7. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.06
    0.06485708 = product of:
      0.12971416 = sum of:
        0.12971416 = sum of:
          0.07312973 = weight(_text_:retrieval in 3087) [ClassicSimilarity], result of:
            0.07312973 = score(doc=3087,freq=6.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.46309367 = fieldWeight in 3087, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
          0.056584436 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
            0.056584436 = score(doc=3087,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 3087, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  8. ¬The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.06
    0.06485708 = product of:
      0.12971416 = sum of:
        0.12971416 = sum of:
          0.07312973 = weight(_text_:retrieval in 4049) [ClassicSimilarity], result of:
            0.07312973 = score(doc=4049,freq=6.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.46309367 = fieldWeight in 4049, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=4049)
          0.056584436 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
            0.056584436 = score(doc=4049,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 4049, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4049)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  9. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.06
    0.06310947 = product of:
      0.12621894 = sum of:
        0.12621894 = sum of:
          0.083780624 = weight(_text_:retrieval in 6967) [ClassicSimilarity], result of:
            0.083780624 = score(doc=6967,freq=14.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.5305404 = fieldWeight in 6967, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
          0.04243833 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
            0.04243833 = score(doc=6967,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.23214069 = fieldWeight in 6967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
      0.5 = coord(1/2)
    
    Abstract
    Explains briefly what constitutes the imaging process and explains how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships, estimated using an information theoretic measure. Reports results of an evaluation exercise comparing the performance of imaging retrieval, using possible world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially the performance of imaging retrieval appeared better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that must be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  10. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.06
    0.06175369 = product of:
      0.12350738 = sum of:
        0.12350738 = sum of:
          0.052776836 = weight(_text_:retrieval in 3103) [ClassicSimilarity], result of:
            0.052776836 = score(doc=3103,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33420905 = fieldWeight in 3103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.078125 = fieldNorm(doc=3103)
          0.070730545 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
            0.070730545 = score(doc=3103,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.38690117 = fieldWeight in 3103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3103)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:55:22
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees u. D.K. Harman
  11. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.06
    0.06175369 = product of:
      0.12350738 = sum of:
        0.12350738 = sum of:
          0.052776836 = weight(_text_:retrieval in 3107) [ClassicSimilarity], result of:
            0.052776836 = score(doc=3107,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33420905 = fieldWeight in 3107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.078125 = fieldNorm(doc=3107)
          0.070730545 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
            0.070730545 = score(doc=3107,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.38690117 = fieldWeight in 3107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3107)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:59:22
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees u. D.K. Harman
  12. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.06
    0.06175369 = product of:
      0.12350738 = sum of:
        0.12350738 = sum of:
          0.052776836 = weight(_text_:retrieval in 2417) [ClassicSimilarity], result of:
            0.052776836 = score(doc=2417,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33420905 = fieldWeight in 2417, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.078125 = fieldNorm(doc=2417)
          0.070730545 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
            0.070730545 = score(doc=2417,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.38690117 = fieldWeight in 2417, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2417)
      0.5 = coord(1/2)
    
    Pages
    S.22-25
  13. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.06
    0.061699476 = product of:
      0.12339895 = sum of:
        0.12339895 = sum of:
          0.07388757 = weight(_text_:retrieval in 5001) [ClassicSimilarity], result of:
            0.07388757 = score(doc=5001,freq=8.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.46789268 = fieldWeight in 5001, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.04951138 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.04951138 = score(doc=5001,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.5 = coord(1/2)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  14. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.05
    0.05087889 = product of:
      0.10175778 = sum of:
        0.10175778 = sum of:
          0.0522464 = weight(_text_:retrieval in 7302) [ClassicSimilarity], result of:
            0.0522464 = score(doc=7302,freq=4.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33085006 = fieldWeight in 7302, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7302)
          0.04951138 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
            0.04951138 = score(doc=7302,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.2708308 = fieldWeight in 7302, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7302)
      0.5 = coord(1/2)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
  15. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.05
    0.05087889 = product of:
      0.10175778 = sum of:
        0.10175778 = sum of:
          0.0522464 = weight(_text_:retrieval in 3002) [ClassicSimilarity], result of:
            0.0522464 = score(doc=3002,freq=4.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33085006 = fieldWeight in 3002, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3002)
          0.04951138 = weight(_text_:22 in 3002) [ClassicSimilarity], result of:
            0.04951138 = score(doc=3002,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.2708308 = fieldWeight in 3002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3002)
      0.5 = coord(1/2)
    
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS and reported in 'Communications of the ACM' 10 years ago continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  16. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.05
    0.050001718 = product of:
      0.100003436 = sum of:
        0.100003436 = sum of:
          0.06463816 = weight(_text_:retrieval in 2026) [ClassicSimilarity], result of:
            0.06463816 = score(doc=2026,freq=12.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.40932083 = fieldWeight in 2026, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.035365272 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.035365272 = score(doc=2026,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
    Beitrag eines Themenbereichs: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  17. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.05
    0.049402952 = product of:
      0.098805904 = sum of:
        0.098805904 = sum of:
          0.042221468 = weight(_text_:retrieval in 5002) [ClassicSimilarity], result of:
            0.042221468 = score(doc=5002,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.26736724 = fieldWeight in 5002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=5002)
          0.056584436 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
            0.056584436 = score(doc=5002,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 5002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5002)
      0.5 = coord(1/2)
    
    Date
    19. 3.1996 11:22:12
  18. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.05
    0.049402952 = product of:
      0.098805904 = sum of:
        0.098805904 = sum of:
          0.042221468 = weight(_text_:retrieval in 744) [ClassicSimilarity], result of:
            0.042221468 = score(doc=744,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.26736724 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
          0.056584436 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
            0.056584436 = score(doc=744,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:01:00
    Footnote
    Translation of the title: TREC: the Text REtrieval Conference
  19. Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011) 0.05
    0.047185786 = product of:
      0.09437157 = sum of:
        0.09437157 = sum of:
          0.059006296 = weight(_text_:retrieval in 4197) [ClassicSimilarity], result of:
            0.059006296 = score(doc=4197,freq=10.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.37365708 = fieldWeight in 4197, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
          0.035365272 = weight(_text_:22 in 4197) [ClassicSimilarity], result of:
            0.035365272 = score(doc=4197,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.19345059 = fieldWeight in 4197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
      0.5 = coord(1/2)
    
    Abstract
    The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
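    The pooling method that the abstract evaluates can be sketched in a few lines: the judgment pool for a query is the union of the top-k documents returned by each participating system, and only pooled documents are assessed. The function name and data below are illustrative, not from INEX:

    ```python
    def build_pool(runs, k):
        """runs: list of ranked document-id lists, one per system.
        Returns the set of documents to be judged (union of top-k)."""
        pool = set()
        for ranked in runs:
            pool.update(ranked[:k])  # only the top-k per system is assessed
        return pool

    runs = [
        ["d1", "d2", "d3", "d4"],   # system A's ranking
        ["d2", "d5", "d1", "d6"],   # system B's ranking
        ["d7", "d2", "d8", "d1"],   # system C's ranking
    ]
    print(sorted(build_pool(runs, k=2)))  # ['d1', 'd2', 'd5', 'd7']
    ```

    Reducing assessment effort then means either shrinking k (shallower pools) or judging fewer queries; the abstract's finding is that, for a fixed budget, shallow pools over many queries are preferable.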
    Date
    22. 1.2011 14:20:56
  20. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.04
    0.044071056 = product of:
      0.08814211 = sum of:
        0.08814211 = sum of:
          0.052776836 = weight(_text_:retrieval in 1184) [ClassicSimilarity], result of:
            0.052776836 = score(doc=1184,freq=8.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.33420905 = fieldWeight in 1184, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.035365272 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.035365272 = score(doc=1184,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.5 = coord(1/2)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05

Types

  • a 344
  • s 15
  • m 8
  • el 5
  • r 3
  • x 2
  • d 1
  • p 1