Search (6 results, page 1 of 1)

  • author_ss:"Rijsbergen, C.J. van"
  • theme_ss:"Retrievalstudien"
  1. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.03
    0.031588875 = product of:
      0.06317775 = sum of:
        0.06317775 = sum of:
          0.013257373 = weight(_text_:a in 5002) [ClassicSimilarity], result of:
            0.013257373 = score(doc=5002,freq=12.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.24964198 = fieldWeight in 5002, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=5002)
          0.04992038 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
            0.04992038 = score(doc=5002,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 5002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5002)
      0.5 = coord(1/2)
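    The indented block above is a Lucene ClassicSimilarity "explain" trace for this record's relevance score. As a minimal sketch (assuming the standard ClassicSimilarity decomposition, with all numeric values copied from the trace), the 0.03 score can be reproduced as follows:

      import math

      # Per-term tf-idf contribution as ClassicSimilarity reports it:
      # queryWeight = idf * queryNorm, fieldWeight = sqrt(tf) * idf * fieldNorm
      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      # Values taken from the trace for doc 5002
      score_a  = term_score(freq=12.0, idf=1.153047,  query_norm=0.046056706, field_norm=0.0625)
      score_22 = term_score(freq=2.0,  idf=3.5018296, query_norm=0.046056706, field_norm=0.0625)

      coord = 0.5  # coord(1/2): one of two top-level query clauses matched
      print((score_a + score_22) * coord)  # ~0.031588875, as shown above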
    
    Abstract
    Many retrieval experiments are intended to discover ways of improving performance, taking the results obtained with some particular technique as a baseline. The fact that substantial alterations to a system often have little or no effect on particular collections is puzzling. This may be due to the initially poor separation of relevant and non-relevant documents. The paper presents a procedure for characterizing this separation for a collection, which can be used to show whether proposed modifications of the base system are likely to be useful.
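    The paper's own separation test is not reproduced in the abstract; purely as an illustration of the general idea, the sketch below characterizes separation as the probability that a randomly chosen relevant document is scored above a randomly chosen non-relevant one (the scores and the statistic are assumptions for illustration, not the paper's procedure):

      # Illustration only: one simple separation statistic for a base system
      def separation(rel_scores, nonrel_scores):
          pairs = len(rel_scores) * len(nonrel_scores)
          wins = sum(r > n for r in rel_scores for n in nonrel_scores)
          ties = sum(r == n for r in rel_scores for n in nonrel_scores)
          return (wins + 0.5 * ties) / pairs

      # Hypothetical retrieval scores for one query of a test collection
      print(separation([0.8, 0.6, 0.5], [0.7, 0.3, 0.2, 0.1]))  # ~0.83; 1.0 = perfect separation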
    Date
    19. 3.1996 11:22:12
    Type
    a
  2. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.02
    0.023258494 = product of:
      0.04651699 = sum of:
        0.04651699 = sum of:
          0.009076704 = weight(_text_:a in 6967) [ClassicSimilarity], result of:
            0.009076704 = score(doc=6967,freq=10.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.1709182 = fieldWeight in 6967, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
          0.037440285 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
            0.037440285 = score(doc=6967,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 6967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
      0.5 = coord(1/2)
    
    Abstract
    Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information-theoretic measure. Reports results of an evaluation exercise comparing the performance of imaging retrieval, using possible world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially, the performance of imaging retrieval appeared to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that must be performed at run time; a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
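    For reference, the two evaluation measures named above have their standard definitions; a minimal sketch (the document identifiers are made up for illustration):

      # precision = retrieved relevant / retrieved; recall = retrieved relevant / relevant
      def precision_recall(retrieved, relevant):
          retrieved, relevant = set(retrieved), set(relevant)
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      print(precision_recall(retrieved=["d1", "d3", "d7"], relevant=["d1", "d2", "d3"]))
      # (0.666..., 0.666...)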
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
    Type
    a
  3. Rijsbergen, C.J. van: Foundations of evaluation (1974) 0.00
    0.0033826875 = product of:
      0.006765375 = sum of:
        0.006765375 = product of:
          0.01353075 = sum of:
            0.01353075 = weight(_text_:a in 1078) [ClassicSimilarity], result of:
              0.01353075 = score(doc=1078,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.25478977 = fieldWeight in 1078, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.15625 = fieldNorm(doc=1078)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  4. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.00
    0.0029294936 = product of:
      0.005858987 = sum of:
        0.005858987 = product of:
          0.011717974 = sum of:
            0.011717974 = weight(_text_:a in 7522) [ClassicSimilarity], result of:
              0.011717974 = score(doc=7522,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.22065444 = fieldWeight in 7522, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7522)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  5. Rijsbergen, C.J. van: Retrieval effectiveness (1981) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 3147) [ClassicSimilarity], result of:
              0.0108246 = score(doc=3147,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 3147, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=3147)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  6. Sparck Jones, K.; Rijsbergen, C.J. van: Progress in documentation : Information retrieval test collections (1976) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 4161) [ClassicSimilarity], result of:
              0.008202582 = score(doc=4161,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 4161, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many retrieval experiments have been based on inadequate test collections, and current research is hampered by the lack of proper collections. This short review does not attempt a fully documented survey of all the collections used in the past decade: hopefully representative examples have been studied to throw light on the requirements test collections should meet, to show how past collections have been defective, and to suggest guidelines for a future "ideal" test collection. The specifications for this collection can be taken as an indirect comment on our present state of knowledge of major retrieval system variables, and on experience in conducting experiments.
    Type
    a