Search (396 results, page 1 of 20)

  • theme_ss:"Retrievalstudien"
  1. Robertson, S.E.: The parametric description of retrieval tests : Part II: Overall measures (1969) 0.03
    0.03411103 = product of:
      0.08527757 = sum of:
        0.022345824 = weight(_text_:of in 4156) [ClassicSimilarity], result of:
          0.022345824 = score(doc=4156,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34207192 = fieldWeight in 4156, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4156)
        0.062931746 = product of:
          0.12586349 = sum of:
            0.12586349 = weight(_text_:mind in 4156) [ClassicSimilarity], result of:
              0.12586349 = score(doc=4156,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.48272148 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Two general requirements for overall measures of retrieval effectiveness are proposed: the measures should be as far as possible independent of generality (interpreted to mean that they can be described in terms of recall and fallout), and they should be able to measure the effectiveness of a performance curve (they should not be restricted to a simple 2x2 table). Several measures that have been proposed are examined with these conditions in mind. It turns out that most of the satisfactory ones are directly or indirectly related to Swets' measure A, the area under the recall-fallout curve. In particular, Brookes' measure S and Rocchio's normalized recall are versions of A.
    Source
    Journal of documentation. 25(1969) no.2, S.93-106
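     The score breakdown displayed with each result is Lucene ClassicSimilarity explain output: per-term tf-idf weights combined by sums, products, and coordination factors. As a minimal illustration (not part of any record, and assuming the standard ClassicSimilarity definitions tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), the following Python sketch reproduces the overall score of result 1 from the constants shown in its tree:

     import math

     def term_score(freq, idf, query_norm, field_norm):
         # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
         return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

     QUERY_NORM = 0.04177434  # queryNorm from the explain tree of result 1

     w_of = term_score(16.0, 1.5637573, QUERY_NORM, 0.0546875)   # weight(_text_:of in 4156)   ~ 0.022345824
     w_mind = term_score(2.0, 6.241566, QUERY_NORM, 0.0546875)   # weight(_text_:mind in 4156) ~ 0.12586349

     # coord(1/2) applies to the inner sum, coord(2/5) to the outer sum
     score = (w_of + 0.5 * w_mind) * 0.4
     print(round(score, 8))  # -> 0.03411103, the displayed score

     The same arithmetic applies to every breakdown below; only the frequencies, idf values, field norms, and coord factors change.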
  2. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.03
    0.028363807 = product of:
      0.04727301 = sum of:
        0.009978054 = product of:
          0.04989027 = sum of:
            0.04989027 = weight(_text_:problem in 6967) [ClassicSimilarity], result of:
              0.04989027 = score(doc=6967,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.28137225 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.2 = coord(1/5)
        0.02031542 = weight(_text_:of in 6967) [ClassicSimilarity], result of:
          0.02031542 = score(doc=6967,freq=18.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.3109903 = fieldWeight in 6967, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.033959076 = score(doc=6967,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
     Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information theoretic measure. Reports results of an evaluation exercise comparing the performance of imaging retrieval, using possible world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially the performance of imaging retrieval appeared to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that has to be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  3. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.0247859 = product of:
      0.06196475 = sum of:
        0.022345824 = weight(_text_:of in 5089) [ClassicSimilarity], result of:
          0.022345824 = score(doc=5089,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34207192 = fieldWeight in 5089, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=5089)
        0.039618924 = product of:
          0.07923785 = sum of:
            0.07923785 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.07923785 = score(doc=5089,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 7.2006 18:43:54
    Source
    Journal of the American Society for Information Science. 41(1990) no.4, S.272-281
  4. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.02
    0.023027908 = product of:
      0.05756977 = sum of:
        0.012618518 = weight(_text_:of in 4393) [ClassicSimilarity], result of:
          0.012618518 = score(doc=4393,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.19316542 = fieldWeight in 4393, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
        0.04495125 = product of:
          0.0899025 = sum of:
            0.0899025 = weight(_text_:mind in 4393) [ClassicSimilarity], result of:
              0.0899025 = score(doc=4393,freq=2.0), product of:
                0.2607373 = queryWeight, product of:
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.04177434 = queryNorm
                0.34480107 = fieldWeight in 4393, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.241566 = idf(docFreq=233, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4393)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to that of assessments made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both by measured access time and by self-report by subjects. However, effects on results by traditional relevance ranking were detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
    Source
    Journal of documentation. 61(2005) no.5, S.623-639
  5. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.022167925 = product of:
      0.05541981 = sum of:
        0.015800884 = weight(_text_:of in 6438) [ClassicSimilarity], result of:
          0.015800884 = score(doc=6438,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.24188137 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.039618924 = product of:
          0.07923785 = sum of:
            0.07923785 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.07923785 = score(doc=6438,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    11. 8.2001 16:22:19
  6. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.019139104 = product of:
      0.04784776 = sum of:
        0.019548526 = weight(_text_:of in 2417) [ClassicSimilarity], result of:
          0.019548526 = score(doc=2417,freq=6.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.2992506 = fieldWeight in 2417, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=2417)
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.056598466 = score(doc=2417,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Pages
    S.22-25
    Series
    Proceedings of the American Society for Information Science; vol. 20
    Source
    Productivity in the information age : proceedings of the 46th ASIS annual meeting, 1983. Ed.: Raymond F Vondra
  7. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.02
    0.018404907 = product of:
      0.046012264 = sum of:
        0.026202802 = weight(_text_:of in 2718) [ClassicSimilarity], result of:
          0.026202802 = score(doc=2718,freq=22.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.40111488 = fieldWeight in 2718, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2718)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
              0.039618924 = score(doc=2718,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 2718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2718)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual inquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the mean average percentage noise figure was approximately 33 per cent, based on a total average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
    Source
    Journal of documentation. 22(1966), S.203-209
  8. Sanderson, M.: The Reuters test collection (1996) 0.02
    0.017902408 = product of:
      0.044756018 = sum of:
        0.02211663 = weight(_text_:of in 6971) [ClassicSimilarity], result of:
          0.02211663 = score(doc=6971,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.33856338 = fieldWeight in 6971, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6971)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.045278773 = score(doc=6971,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Describes the Reuters test collection, which at 22,173 references is significantly larger than most traditional test collections. In addition, Reuters has none of the recall calculation problems normally associated with some of the larger test collections available. Explains the method derived by D.D. Lewis to perform retrieval experiments on the Reuters collection and illustrates the use of the Reuters collection with some simple retrieval experiments that compare the performance of stemming algorithms.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  9. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.02
    0.017704215 = product of:
      0.044260535 = sum of:
        0.015961302 = weight(_text_:of in 3107) [ClassicSimilarity], result of:
          0.015961302 = score(doc=3107,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.24433708 = fieldWeight in 3107, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.056598466 = score(doc=3107,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    27. 2.1999 20:59:22
    Imprint
     Gaithersburg, MD : National Institute of Standards and Technology
  10. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.02
    0.017131606 = product of:
      0.042829014 = sum of:
        0.02018963 = weight(_text_:of in 5002) [ClassicSimilarity], result of:
          0.02018963 = score(doc=5002,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.3090647 = fieldWeight in 5002, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.045278773 = score(doc=5002,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Many retrieval experiments are intended to discover ways of improving performance, taking the results obtained with some particular technique as a baseline. The fact that substantial alterations to a system often have little or no effect on particular collections is puzzling. This may be due to the initially poor separation of relevant and non-relevant documents. The paper presents a procedure for characterizing this separation for a collection, which can be used to show whether proposed modifications of the base system are likely to be useful.
    Date
    19. 3.1996 11:22:12
    Source
    Journal of documentation. 29(1973) no.3, S.251-257
  11. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.02
    0.016558254 = product of:
      0.041395634 = sum of:
        0.024416098 = weight(_text_:of in 3564) [ClassicSimilarity], result of:
          0.024416098 = score(doc=3564,freq=26.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.37376386 = fieldWeight in 3564, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3564)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.033959076 = score(doc=3564,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed that 2 reasons contributed to these failures: the searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
    Source
    ASIS'89. Managing information and technology. Proceedings of the 52nd annual meeting of the American Society for Information Science, Washington D.C., 30.10.-2.11.1989. Vol.26. Ed.by J. Katzer and G.B. Newby
  12. Spink, A.; Greisdorf, H.: Users' partial relevance judgements during online searching (1997) 0.02
    0.01642621 = product of:
      0.04106552 = sum of:
        0.020162916 = product of:
          0.10081457 = sum of:
            0.10081457 = weight(_text_:problem in 623) [ClassicSimilarity], result of:
              0.10081457 = score(doc=623,freq=6.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.5685763 = fieldWeight in 623, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=623)
          0.2 = coord(1/5)
        0.020902606 = weight(_text_:of in 623) [ClassicSimilarity], result of:
          0.020902606 = score(doc=623,freq=14.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.31997898 = fieldWeight in 623, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=623)
      0.4 = coord(2/5)
    
    Abstract
     Reports results of research to examine users conducting their initial online search on a particular information problem. Findings from 3 separate studies of relevance judgements by 44 initial search users were examined, including 2 studies of 13 end users and a study of 18 users engaged in mediated online searches. Retrieved items were judged on the scale 'relevant', 'partially relevant' and 'not relevant'. Results suggest that: a relationship exists between partially relevant items retrieved and changes in the users' information problem or question during an information seeking process; partial relevance judgements play an important role for users in the early stages of seeking information on a particular information problem; and 'highly' relevant items may or may not be the only items useful at the early stages of users' information seeking processes.
  13. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.02
    0.016284827 = product of:
      0.040712066 = sum of:
        0.020902606 = weight(_text_:of in 3002) [ClassicSimilarity], result of:
          0.020902606 = score(doc=3002,freq=14.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.31997898 = fieldWeight in 3002, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3002)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 3002) [ClassicSimilarity], result of:
              0.039618924 = score(doc=3002,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 3002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3002)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     The test of retrieval effectiveness performed on IBM's STAIRS and reported in 'Communications of the ACM' 10 years ago continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  14. The Fifth Text Retrieval Conference (TREC-5) (1997) 0.02
    0.016279016 = product of:
      0.040697537 = sum of:
        0.018058153 = weight(_text_:of in 3087) [ClassicSimilarity], result of:
          0.018058153 = score(doc=3087,freq=8.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.27643585 = fieldWeight in 3087, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3087)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.045278773 = score(doc=3087,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was discussion of retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
    Imprint
     Gaithersburg, MD : National Institute of Standards and Technology
  15. Van der Walt, H.E.A.; Brakel, P.A. van: Method for the evaluation of the retrieval effectiveness of a CD-ROM bibliographic database (1991) 0.02
    0.016065711 = product of:
      0.040164277 = sum of:
        0.01646295 = product of:
          0.08231475 = sum of:
            0.08231475 = weight(_text_:problem in 3114) [ClassicSimilarity], result of:
              0.08231475 = score(doc=3114,freq=4.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46424055 = fieldWeight in 3114, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3114)
          0.2 = coord(1/5)
        0.023701325 = weight(_text_:of in 3114) [ClassicSimilarity], result of:
          0.023701325 = score(doc=3114,freq=18.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.36282203 = fieldWeight in 3114, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3114)
      0.4 = coord(2/5)
    
    Abstract
     Addresses the problem of how potential users of CD-ROM data bases can objectively establish which version of the same data base is best suited for a specific situation. The problem was solved by applying the retrieval effectiveness of current on-line data base search systems as a standard measurement. 5 search queries from the medical sciences were presented by experienced users of MEDLINE. Search strategies were written for both DIALOG and DATA-STAR. Search results were compared to create a recall base from documents present in both on-line searches. This recall base was then used to establish the recall and precision of 4 CD-ROM data bases: MEDLINE, Compact Cambridge MEDLINE, DIALOG OnDisc, Comprehensive MEDLINE/EBSCO.
    Source
    African journal of library and information science. 59(1991) no.1, S.32-42
  16. Yerbury, H.; Parker, J.: Novice searchers' use of familiar structures in searching bibliographic information retrieval systems (1998) 0.02
    0.015896818 = product of:
      0.039742045 = sum of:
        0.017282499 = product of:
          0.08641249 = sum of:
            0.08641249 = weight(_text_:problem in 2874) [ClassicSimilarity], result of:
              0.08641249 = score(doc=2874,freq=6.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.48735106 = fieldWeight in 2874, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2874)
          0.2 = coord(1/5)
        0.022459546 = weight(_text_:of in 2874) [ClassicSimilarity], result of:
          0.022459546 = score(doc=2874,freq=22.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34381276 = fieldWeight in 2874, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2874)
      0.4 = coord(2/5)
    
    Abstract
     Reports results of a study of the use of metaphors as problem solving mechanisms by novice searchers of bibliographic databases. Metaphors provide a framework or 'familiar structure' of credible associations within which relationships in other domains may be considered. 28 students taking an undergraduate course in information retrieval at Sydney University of Technology were recorded as they 'talked through' a search on a bibliographic retrieval system. The transcripts were analyzed using conventional methods and the NUDIST software package for qualitative research. A range of metaphors was apparent from the language used by students in the search process. Those which predominated were: a journey; human interaction; a building or matching process; a problem solving process; and a search for a quantity. Many of the students experiencing the interaction as a problem solving process or a search for quantity perceived the outcomes as successful. Concludes that, when memory for operating methods and procedures is incomplete, an unconscious approach through the use of a conceptual system which is consonant with the task at hand may also lead to success in bibliographic searching.
    Source
    Journal of information science. 24(1998) no.4, S.207-214
  17. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    0.015834233 = product of:
      0.03958558 = sum of:
        0.011286346 = weight(_text_:of in 3103) [ClassicSimilarity], result of:
          0.011286346 = score(doc=3103,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.17277241 = fieldWeight in 3103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3103)
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.056598466 = score(doc=3103,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    27. 2.1999 20:55:22
    Imprint
     Gaithersburg, MD : National Institute of Standards and Technology
  18. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.02
    0.015775634 = product of:
      0.039439082 = sum of:
        0.022459546 = weight(_text_:of in 1757) [ClassicSimilarity], result of:
          0.022459546 = score(doc=1757,freq=22.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34381276 = fieldWeight in 1757, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1757)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.033959076 = score(doc=1757,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study in which 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations, between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I search terms were compared character by character. At level II different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions.
  19. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.02
    0.015775634 = product of:
      0.039439082 = sum of:
        0.022459546 = weight(_text_:of in 4341) [ClassicSimilarity], result of:
          0.022459546 = score(doc=4341,freq=22.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34381276 = fieldWeight in 4341, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4341)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
              0.033959076 = score(doc=4341,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 4341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4341)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments. The effectiveness of the packages was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and the production and use of packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that choice of searching strategies, search effectiveness and use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies.
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  20. Drabenstott, K.M.; Weller, M.S.: A comparative approach to system evaluation : delegating control of retrieval tests to an experimental online system (1996) 0.02
    0.01552351 = product of:
      0.038808774 = sum of:
        0.01646295 = product of:
          0.08231475 = sum of:
            0.08231475 = weight(_text_:problem in 7435) [ClassicSimilarity], result of:
              0.08231475 = score(doc=7435,freq=4.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.46424055 = fieldWeight in 7435, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7435)
          0.2 = coord(1/5)
        0.022345824 = weight(_text_:of in 7435) [ClassicSimilarity], result of:
          0.022345824 = score(doc=7435,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34207192 = fieldWeight in 7435, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7435)
      0.4 = coord(2/5)
    
    Abstract
     Describes the comparative approach to system evaluation used in this research project, which delegated the administration of an online retrieval test to an experimental online catalogue to produce data for evaluating the effectiveness of a new subject access design. Describes the methods enlisted to sort out problem test administrations, e.g. to identify out-of-scope queries, incomplete system administrations, and suspect post-search questionnaire responses. Covers how the researchers handled problem search administrations and what actions they would use to reduce or eliminate the occurrence of such administrations in future online retrieval tests that delegate control of retrieval tests to online systems.
    Source
    Global complexity: information, chaos and control. Proceedings of the 59th Annual Meeting of the American Society for Information Science, ASIS'96, Baltimore, Maryland, 21-24 Oct 1996. Ed.: S. Hardin

Types

  • a 368
  • s 14
  • m 8
  • el 6
  • r 4
  • x 2
  • d 1
  • p 1