Search (5 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • year_i:[1970 TO 1980}
  1. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.03
    0.032256197 = product of:
      0.048384294 = sum of:
        0.021338228 = weight(_text_:on in 5002) [ClassicSimilarity], result of:
          0.021338228 = score(doc=5002,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.027046064 = product of:
          0.054092128 = sum of:
            0.054092128 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.054092128 = score(doc=5002,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
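    The explain tree above is plain tf-idf scoring under Lucene's ClassicSimilarity, and its arithmetic can be checked directly. A minimal Python sketch, using the constants shown in the tree (ClassicSimilarity's tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) reproduce the listed factors up to rounding):

      import math

      MAX_DOCS   = 44218        # maxDocs from the tree above
      QUERY_NORM = 0.04990557   # queryNorm from the tree above

      def idf(doc_freq):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm):
          tf = math.sqrt(freq)                            # tf(freq=2.0) = 1.4142135
          query_weight = idf(doc_freq) * QUERY_NORM       # queryWeight
          field_weight = tf * idf(doc_freq) * field_norm  # fieldWeight
          return query_weight * field_weight

      s_on = term_score(freq=2.0, doc_freq=13325, field_norm=0.0625)  # ≈ 0.021338228
      s_22 = term_score(freq=2.0, doc_freq=3622,  field_norm=0.0625)  # ≈ 0.054092128
      # "22" matched one of two sub-clauses (coord(1/2)); two of the three
      # query clauses matched overall (coord(2/3)).
      score = (s_on + 0.5 * s_22) * (2.0 / 3.0)
      print(f"{score:.9f}")  # ≈ 0.032256197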
    
    Abstract
    Many retrieval experiments are intended to discover ways of improving performance, taking the results obtained with some particular technique as a baseline. The fact that substantial alterations to a system often have little or no effect on particular collections is puzzling. This may be due to the initially poor separation of relevant and non-relevant documents. The paper presents a procedure for characterizing this separation for a collection, which can be used to show whether proposed modifications of the base system are likely to be useful.
    Date
    19. 3.1996 11:22:12
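    The paper's test statistic is not spelled out in the abstract, so as a purely hypothetical illustration of what "characterizing the separation" can mean, the sketch below scores separation as the probability that a randomly chosen relevant document outscores a randomly chosen non-relevant one (1.0 = perfect separation, 0.5 = chance level). The function and the toy scores are invented for this example, not taken from the paper.

      def separation(rel_scores, nonrel_scores):
          # Hypothetical statistic (not van Rijsbergen's actual test):
          # P(a random relevant doc outscores a random non-relevant doc),
          # counting ties as half a win.
          wins = sum(1.0 if r > n else 0.5 if r == n else 0.0
                     for r in rel_scores for n in nonrel_scores)
          return wins / (len(rel_scores) * len(nonrel_scores))

      # Toy collection where relevant documents only partly outscore the rest:
      print(separation([0.9, 0.7, 0.4], [0.8, 0.3, 0.2, 0.1]))  # 0.8333...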
  2. Sparck Jones, K.; Rijsbergen, C.J. van: Progress in documentation : Information retrieval test collections (1976) 0.01
    0.010779679 = product of:
      0.032339036 = sum of:
        0.032339036 = weight(_text_:on in 4161) [ClassicSimilarity], result of:
          0.032339036 = score(doc=4161,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29462588 = fieldWeight in 4161, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4161)
      0.33333334 = coord(1/3)
    
    Abstract
    Many retrieval experiments have been based on inadequate test collections, and current research is hampered by the lack of proper collections. This short review does not attempt a fully documented survey of all the collections used in the past decade: hopefully representative examples have been studied to throw light on the requirements test collections should meet, to show how past collections have been defective, and to suggest guidelines for a future "ideal" test collection. The specifications for this collection can be taken as an indirect comment on our present state of knowledge of major retrieval system variables, and experience in conducting experiments.
  3. Cooper, W.S.: On selecting a measure of retrieval effectiveness, revisited (1973) 0.01
    0.010669115 = product of:
      0.032007344 = sum of:
        0.032007344 = weight(_text_:on in 1930) [ClassicSimilarity], result of:
          0.032007344 = score(doc=1930,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29160398 = fieldWeight in 1930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.09375 = fieldNorm(doc=1930)
      0.33333334 = coord(1/3)
    
  4. King, D.W.; Bryant, E.C.: The evaluation of information services and products (1971) 0.01
    0.008890929 = product of:
      0.026672786 = sum of:
        0.026672786 = weight(_text_:on in 4157) [ClassicSimilarity], result of:
          0.026672786 = score(doc=4157,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 4157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=4157)
      0.33333334 = coord(1/3)
    
    Content
    Covers the evaluative and control aspects of: classification and indexing processes and languages; document screening processes; composition, reproduction, acquisition, storage, and presentation; user-system interfaces. Also contains brief and lucid primers on user surveys, statistics, sampling methods, and experimental design.
  5. Byrne, J.R.: Relative effectiveness of titles, abstracts, and subject headings for machine retrieval from the COMPENDEX services (1975) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 1604) [ClassicSimilarity], result of:
          0.01867095 = score(doc=1604,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 1604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1604)
      0.33333334 = coord(1/3)
    
    Abstract
    We have investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The COMPENDEX database was used for this study since it combined all of these data elements of interest. In general, the results obtained from the experiments indicate that, as expected, titles alone are not satisfactory for efficient retrieval. The combination of titles and abstracts came the closest to 100% retrieval, with searching of abstracts alone doing almost as well. Indexer input, although necessary for 100% retrieval in almost all cases, was found to be relatively unimportant.
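    As a toy re-creation of this kind of field-combination measurement (illustrative only: the records, query, field names, and relevance judgements below are invented, not COMPENDEX data), assuming simple term matching per field:

      # Everything here is invented for illustration; see the note above.
      records = [
          {"title": "laser welding of steel",
           "abstract": "experiments on laser beam welding",
           "subjects": ["welding", "lasers"]},
          {"title": "pipeline corrosion",
           "abstract": "laser inspection of welded pipeline joints",
           "subjects": ["corrosion"]},
          {"title": "arc processes",
           "abstract": "arc behaviour in gas shielding",
           "subjects": ["welding", "arcs"]},
      ]
      relevant = {0, 1}                     # records judged relevant
      query_terms = {"laser", "welding"}

      def field_text(rec, field):
          value = rec[field]
          return " ".join(value) if isinstance(value, list) else value

      def retrieved(fields):
          # A record is retrieved if any query term occurs in any chosen field.
          return {i for i, rec in enumerate(records)
                  if any(t in " ".join(field_text(rec, f) for f in fields).lower()
                         for t in query_terms)}

      for combo in (["title"], ["abstract"],
                    ["title", "abstract"],
                    ["title", "abstract", "subjects"]):
          recall = len(retrieved(combo) & relevant) / len(relevant)
          print("+".join(combo), f"recall = {recall:.2f}")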