Search (465 results, page 1 of 24)

  • theme_ss:"Retrievalstudien"
  1. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.04
    Date
    22. 7.2006 18:43:54
    Type
    a
  2. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.03
    Date
    20.10.2000 12:22:23
    Type
    a
  3. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.03
    Source
    Online. 22(1998) no.6, S.57-58
    Type
    a
  4. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.03
    Date
    11. 8.2001 16:22:19
    Type
    a
  5. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.03
    Abstract
    Purpose: The purpose of this paper is to propose a method that yields more accurate results when comparing the performance of paired information retrieval (IR) systems, relative to the current method, which is based on the mean effectiveness scores of the systems across a set of identified topics/queries. Design/methodology/approach: Instead of the classic method of using a set of topic scores, document-level scores are used as the evaluation unit. These document scores are the defined document weights, which take the role of the systems' mean average precision (MAP) scores as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection. Findings: The p-values generated by the two types of significance tests, namely Student's t-test and the Mann-Whitney test, show that by using document-level scores as the evaluation unit, the difference between IR systems is more significant than when topic scores are used. Originality/value: Using a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
    Type
    a
  6. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.03
    Date
    22. 7.2012 19:25:54
    Type
    a
  7. Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019) 0.03
    Abstract
    Purpose: Effort, in addition to relevance, is a major factor in the satisfaction and utility of a document to the actual user. The purpose of this paper is to propose a method for generating relevance judgments that incorporate effort without involving human judges. The study then determines the variation in system rankings caused by low-effort relevance judgments when evaluating retrieval systems at different depths of evaluation. Design/methodology/approach: Effort-based relevance judgments are generated using a proposed boxplot approach applied to simple document features, HTML features and readability features. The boxplot approach is a simple yet repeatable way to classify documents' effort while ensuring that outlier scores do not skew the grading of the entire set of documents. Findings: Evaluating retrieval systems with low-effort relevance judgments has a stronger influence at shallow evaluation depths than at deeper ones. It is shown that the difference in system rankings is due to the low-effort documents and not to the number of relevant documents. Originality/value: Hence, it is crucial to evaluate retrieval systems at shallow depth using low-effort relevance judgments.
    Date
    20. 1.2015 18:30:22
    Type
    a
  8. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.03
    Pages
    S.22-25
    Type
    a
  9. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    Date
    27. 2.1999 20:55:22
    Type
    a
  10. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.02
    Date
    27. 2.1999 20:59:22
    Type
    a
  11. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.02
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all quite conceptually/algorithmically dissimilar, equally performing, and well performing, in that order of importance.
    Date
    22. 3.2009 18:48:28
    Type
    a
  12. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.02
    Abstract
    Many retrieval experiments are intended to discover ways of improving performance, taking the results obtained with some particular technique as a baseline. The fact that substantial alterations to a system often have little or no effect on particular collections is puzzling. This may be due to the initially poor separation of relevant and non-relevant documents. The paper presents a procedure for characterizing this separation for a collection, which can be used to show whether proposed modifications of the base system are likely to be useful.
    Date
    19. 3.1996 11:22:12
    Type
    a
  13. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.02
    Abstract
    Describes 3 searches on the topic of virtual communities, done on the WWW using HotBot and on traditional services using LEXIS-NEXIS and ABI/Inform. Concludes that the WWW is a good starting place for a broad concept search, but the traditional services are better for more precise topics.
    Source
    Online. 22(1998) no.3, S.24-26,28
    Type
    a
  14. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.02
    Abstract
    The German search engines Abacho, Acoon, Fireball and Lycos, as well as the web directories Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, a recall of around 22%, a precision of just under 19% and an availability of 24% are achieved.
    Type
    a
  15. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.02
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual inquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the mean average percentage noise figure was approximately 33 per cent, based on a total average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
    Source
    Journal of documentation. 22(1966), S.203-209
    Type
    a
  16. Sanderson, M.: The Reuters test collection (1996) 0.02
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
    Type
    a
  17. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.02
    Date
    1. 8.1996 22:01:00
    Type
    a
  18. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.02
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of two similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
    Type
    a
  19. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.02
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
    Type
    a
  20. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.02
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
    Type
    a

Types

  • a 443
  • el 10
  • s 9
  • r 6
  • m 5
  • p 2
  • d 1