Search (456 results, page 1 of 23)

  • Filter: theme_ss:"Retrievalstudien"
  1. Fricke, M.: Measuring recall (1998) 0.07
    0.07030652 = product of:
      0.17576629 = sum of:
        0.014509009 = weight(_text_:information in 3802) [ClassicSimilarity], result of:
          0.014509009 = score(doc=3802,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.2687516 = fieldWeight in 3802, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3802)
        0.024872115 = weight(_text_:retrieval in 3802) [ClassicSimilarity], result of:
          0.024872115 = score(doc=3802,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.26736724 = fieldWeight in 3802, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3802)
        0.11247083 = weight(_text_:ranking in 3802) [ClassicSimilarity], result of:
          0.11247083 = score(doc=3802,freq=4.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.67612857 = fieldWeight in 3802, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=3802)
        0.023914335 = product of:
          0.04782867 = sum of:
            0.04782867 = weight(_text_:evaluation in 3802) [ClassicSimilarity], result of:
              0.04782867 = score(doc=3802,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.37076265 = fieldWeight in 3802, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3802)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
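The explain tree above is a standard Lucene ClassicSimilarity product. As a minimal sketch of how one branch multiplies out, using the "retrieval" term of doc 3802 (queryNorm and fieldNorm are copied from the tree; the tf/idf formulas are Lucene's classic ones, tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1))):

```python
import math

# Lucene ClassicSimilarity building blocks
def classic_tf(freq):
    return math.sqrt(freq)

def classic_idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# values copied from the "retrieval" branch of doc 3802 above
query_norm = 0.030753274
field_norm = 0.0625

idf = classic_idf(5836, 44218)                      # ~3.024915
query_weight = idf * query_norm                     # ~0.093026
field_weight = classic_tf(2.0) * idf * field_norm   # ~0.267367
score = query_weight * field_weight                 # ~0.024872
```

The same arithmetic reproduces every leaf above: score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm.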
    
    Abstract
    Recall, the proportion of the relevant documents retrieved, is a key indicator of the performance of an information retrieval system. With large information systems, like the WWW, recall is almost impossible to measure or estimate by all standard techniques. Proposes a 'needle hiding' technique for measuring recall under these circumstances. Shows that ranking by relative recall does not have to be isomorphic to ranking by recall, and hence the use of relative recall for comparative evaluation might not be entirely sound.
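The distinction the abstract draws can be made concrete with the usual definitions; a minimal sketch with invented numbers (relative recall substitutes the pooled relevant set found by all compared systems for the unknowable full relevant set):

```python
def recall(relevant_retrieved, total_relevant):
    # classic recall: share of ALL relevant documents that were retrieved
    return relevant_retrieved / total_relevant

def relative_recall(relevant_retrieved, pooled_relevant):
    # relative recall: the (unknowable) total is replaced by the number of
    # relevant documents that all compared systems found between them
    return relevant_retrieved / pooled_relevant

# invented numbers: the pool covers only 20 of the 50 truly relevant
# documents, and the system retrieves 15 relevant ones (all in the pool)
r = recall(15, 50)             # 0.30
rr = relative_recall(15, 20)   # 0.75
```

For a single query the two measures differ only by a constant factor, so one way the rankings can diverge, as the abstract argues, is when scores are aggregated over queries whose pools cover different fractions of the truly relevant documents.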
    Source
    Journal of information science. 24(1998) no.6, S.409-417
  2. Alemayehu, N.: Analysis of performance variation using query expansion (2003) 0.06
    0.06354545 = product of:
      0.15886362 = sum of:
        0.010881756 = weight(_text_:information in 1454) [ClassicSimilarity], result of:
          0.010881756 = score(doc=1454,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.20156369 = fieldWeight in 1454, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1454)
        0.04569299 = weight(_text_:retrieval in 1454) [ClassicSimilarity], result of:
          0.04569299 = score(doc=1454,freq=12.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.49118498 = fieldWeight in 1454, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1454)
        0.08435312 = weight(_text_:ranking in 1454) [ClassicSimilarity], result of:
          0.08435312 = score(doc=1454,freq=4.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.5070964 = fieldWeight in 1454, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1454)
        0.017935753 = product of:
          0.035871506 = sum of:
            0.035871506 = weight(_text_:evaluation in 1454) [ClassicSimilarity], result of:
              0.035871506 = score(doc=1454,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.278072 = fieldWeight in 1454, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1454)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    Information retrieval performance evaluation is commonly based on the classical recall- and precision-based figures or graphs. However, important information indicating causes for variation may remain hidden under the average recall and precision figures. Identifying significant causes for variation can help researchers and developers to focus on opportunities for improvement that underlie the averages. This article presents a case study showing the potential of a statistical repeated measures analysis of variance for testing the significance of factors in retrieval performance variation. The TREC-9 Query Track performance data is used as a case study and the factors studied are retrieval method, topic, and their interaction. The results show that retrieval method, topic, and their interaction are all significant. A topic-level analysis is also made to see the nature of variation in the performance of retrieval methods across topics. The observed retrieval performances of expansion runs are truly significant improvements for most of the topics. Analyses of the effect of query expansion on document ranking confirm that expansion affects ranking positively.
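The layout described here (one effectiveness score per method-topic pair) corresponds to a two-way ANOVA without replication, where the method effect is tested against the method-by-topic residual. A stdlib-only sketch of that F statistic, with made-up average-precision values (the paper's actual analysis and data are not reproduced):

```python
def rm_anova_f(scores):
    """F statistic for the method effect in a two-way (method x topic)
    layout with one observation per cell, topics acting as 'subjects'."""
    m = len(scores)          # number of retrieval methods
    t = len(scores[0])       # number of topics
    grand = sum(sum(row) for row in scores) / (m * t)
    method_means = [sum(row) / t for row in scores]
    topic_means = [sum(scores[i][j] for i in range(m)) / m for j in range(t)]
    ss_method = t * sum((mm - grand) ** 2 for mm in method_means)
    ss_topic = m * sum((tm - grand) ** 2 for tm in topic_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_error = ss_total - ss_method - ss_topic   # method-by-topic residual
    df_method, df_error = m - 1, (m - 1) * (t - 1)
    return (ss_method / df_method) / (ss_error / df_error)

# made-up average-precision scores: 2 methods x 2 topics
f_stat = rm_anova_f([[0.1, 0.2], [0.2, 0.5]])
```

The resulting F would then be compared against an F distribution with (m-1, (m-1)(t-1)) degrees of freedom to judge significance.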
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.5, S.379-391
  3. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.06
    0.057892047 = product of:
      0.19297348 = sum of:
        0.014659365 = weight(_text_:information in 7302) [ClassicSimilarity], result of:
          0.014659365 = score(doc=7302,freq=8.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.27153665 = fieldWeight in 7302, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.03077767 = weight(_text_:retrieval in 7302) [ClassicSimilarity], result of:
          0.03077767 = score(doc=7302,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.33085006 = fieldWeight in 7302, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.14753644 = sum of:
          0.11836993 = weight(_text_:evaluation in 7302) [ClassicSimilarity], result of:
            0.11836993 = score(doc=7302,freq=16.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.9175908 = fieldWeight in 7302, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7302)
          0.029166508 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
            0.029166508 = score(doc=7302,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.2708308 = fieldWeight in 7302, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7302)
      0.3 = coord(3/10)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
    Source
    Information processing and management. 30(1994) no.2, S.205-221
  4. Ahlgren, P.; Grönqvist, L.: Evaluation of retrieval effectiveness with incomplete relevance data : theoretical and experimental comparison of three measures (2008) 0.05
    0.051448073 = product of:
      0.12862018 = sum of:
        0.0073296824 = weight(_text_:information in 2032) [ClassicSimilarity], result of:
          0.0073296824 = score(doc=2032,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13576832 = fieldWeight in 2032, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2032)
        0.03077767 = weight(_text_:retrieval in 2032) [ClassicSimilarity], result of:
          0.03077767 = score(doc=2032,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.33085006 = fieldWeight in 2032, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2032)
        0.069587775 = weight(_text_:ranking in 2032) [ClassicSimilarity], result of:
          0.069587775 = score(doc=2032,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.4183332 = fieldWeight in 2032, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2032)
        0.020925045 = product of:
          0.04185009 = sum of:
            0.04185009 = weight(_text_:evaluation in 2032) [ClassicSimilarity], result of:
              0.04185009 = score(doc=2032,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.32441732 = fieldWeight in 2032, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2032)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    This paper investigates two relatively new measures of retrieval effectiveness in relation to the problem of incomplete relevance data. The measures, Bpref and RankEff, which do not take into account documents that have not been relevance judged, are compared theoretically and experimentally. The experimental comparisons involve a third measure, the well-known mean uninterpolated average precision. The results indicate that RankEff is the most stable of the three measures when the amount of relevance data is reduced, with respect to system ranking and absolute values. In addition, RankEff has the lowest error-rate.
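Of the measures compared here, Bpref has a compact standard definition; a sketch of the Buckley-Voorhees formulation (as in trec_eval), where unjudged documents are simply skipped rather than assumed nonrelevant (RankEff itself is not reproduced):

```python
def bpref(ranking, relevant, nonrelevant):
    """Bpref: for each retrieved relevant document, penalize by the number
    of judged-nonrelevant documents ranked above it (capped), ignoring
    unjudged documents entirely."""
    R, N = len(relevant), len(nonrelevant)
    if R == 0 or N == 0:
        return 0.0
    denom = min(R, N)
    nonrel_seen, total = 0, 0.0
    for doc in ranking:
        if doc in nonrelevant:
            nonrel_seen += 1
        elif doc in relevant:          # unjudged docs fall through
            total += 1.0 - min(nonrel_seen, denom) / denom
    return total / R

run = ["r1", "n1", "r2", "u1", "r3"]   # u1 is unjudged
score = bpref(run, {"r1", "r2", "r3"}, {"n1", "n2"})
```

Because unjudged documents contribute nothing, the measure stays comparatively stable as relevance data is thinned out, which is the property the experiments above probe.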
    Source
    Information processing and management. 44(2008) no.1, S.212-225
  5. Gao, R.; Ge, Y.; Sha, C.: FAIR: Fairness-aware information retrieval evaluation (2022) 0.05
    0.047572672 = product of:
      0.11893168 = sum of:
        0.011706905 = weight(_text_:information in 669) [ClassicSimilarity], result of:
          0.011706905 = score(doc=669,freq=10.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.21684799 = fieldWeight in 669, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=669)
        0.02198405 = weight(_text_:retrieval in 669) [ClassicSimilarity], result of:
          0.02198405 = score(doc=669,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.23632148 = fieldWeight in 669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=669)
        0.07029427 = weight(_text_:ranking in 669) [ClassicSimilarity], result of:
          0.07029427 = score(doc=669,freq=4.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.42258036 = fieldWeight in 669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=669)
        0.01494646 = product of:
          0.02989292 = sum of:
            0.02989292 = weight(_text_:evaluation in 669) [ClassicSimilarity], result of:
              0.02989292 = score(doc=669,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.23172665 = fieldWeight in 669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=669)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    With the emerging need to create fairness-aware solutions for search and recommendation systems, evaluating such solutions poses a daunting challenge. While many traditional information retrieval (IR) metrics can capture relevance, diversity, and novelty as utility with respect to users, they are not suitable for inferring whether the presented results are fair from the perspective of responsible information exposure. On the other hand, existing fairness metrics do not account for user utility or do not measure it adequately. To address this problem, we propose a new metric called FAIR. By unifying standard IR metrics and fairness measures into an integrated metric, FAIR offers a new perspective for evaluating fairness-aware ranking results. Based on this metric, we developed an effective ranking algorithm that jointly optimizes user utility and fairness. The experimental results showed that our FAIR metric could highlight results with good user utility and fair information exposure. We showed how FAIR relates to a set of existing utility and fairness metrics and demonstrated the effectiveness of our FAIR-based algorithm. We believe our work opens up a new direction for pursuing metrics that evaluate and help implement fair ranking systems.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.10, S.1461-1473
  6. Angelini, M.; Fazzini, V.; Ferro, N.; Santucci, G.; Silvello, G.: CLAIRE: A combinatorial visual analytics system for information retrieval evaluation (2018) 0.05
    0.046797723 = product of:
      0.11699431 = sum of:
        0.010470974 = weight(_text_:information in 5049) [ClassicSimilarity], result of:
          0.010470974 = score(doc=5049,freq=8.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.19395474 = fieldWeight in 5049, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
        0.026924854 = weight(_text_:retrieval in 5049) [ClassicSimilarity], result of:
          0.026924854 = score(doc=5049,freq=6.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.28943354 = fieldWeight in 5049, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
        0.049705554 = weight(_text_:ranking in 5049) [ClassicSimilarity], result of:
          0.049705554 = score(doc=5049,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.29880944 = fieldWeight in 5049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
        0.02989292 = product of:
          0.05978584 = sum of:
            0.05978584 = weight(_text_:evaluation in 5049) [ClassicSimilarity], result of:
              0.05978584 = score(doc=5049,freq=8.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.4634533 = fieldWeight in 5049, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5049)
          0.5 = coord(1/2)
      0.4 = coord(4/10)
    
    Abstract
    Information Retrieval (IR) develops complex systems, constituted of several components, which aim at returning and optimally ranking the most relevant documents in response to user queries. In this context, experimental evaluation plays a central role, since it allows for measuring IR systems' effectiveness, increasing the understanding of their functioning, and better directing the efforts for improving them. Current evaluation methodologies are limited by two major factors: (i) IR systems are evaluated as "black boxes", since it is not possible to decompose the contributions of the different components, e.g., stop lists, stemmers, and IR models; (ii) given that it is not possible to predict the effectiveness of an IR system, both academia and industry need to explore huge numbers of systems, originated by large combinatorial compositions of their components, to understand how they perform and how these components interact together. We propose a Combinatorial visuaL Analytics system for Information Retrieval Evaluation (CLAIRE) which allows for exploring and making sense of the performances of a large number of IR systems, in order to quickly and intuitively grasp which system configurations are preferred, what the contributions of the different components are, and how these components interact together. The CLAIRE system is then validated against use cases based on several test collections using a wide set of systems, generated by a combinatorial composition of several off-the-shelf components, representing the most common denominator almost always present in English IR systems. In particular, we validate the findings enabled by CLAIRE with respect to consolidated deep statistical analyses and we show that the CLAIRE system allows the generation of new insights, which were not detectable with traditional approaches.
    Source
    Information processing and management. 54(2018) no.6, S.1077-1100
  7. Fan, W.; Luo, M.; Wang, L.; Xi, W.; Fox, E.A.: Tuning before feedback : combining ranking discovery and blind feedback for robust retrieval (2004) 0.05
    0.046155058 = product of:
      0.15385018 = sum of:
        0.010470974 = weight(_text_:information in 4052) [ClassicSimilarity], result of:
          0.010470974 = score(doc=4052,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.19395474 = fieldWeight in 4052, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=4052)
        0.0439681 = weight(_text_:retrieval in 4052) [ClassicSimilarity], result of:
          0.0439681 = score(doc=4052,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.47264296 = fieldWeight in 4052, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=4052)
        0.09941111 = weight(_text_:ranking in 4052) [ClassicSimilarity], result of:
          0.09941111 = score(doc=4052,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.5976189 = fieldWeight in 4052, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.078125 = fieldNorm(doc=4052)
      0.3 = coord(3/10)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  8. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.05
    0.04596623 = product of:
      0.15322076 = sum of:
        0.025130337 = weight(_text_:information in 2556) [ClassicSimilarity], result of:
          0.025130337 = score(doc=2556,freq=18.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.46549135 = fieldWeight in 2556, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
        0.074616335 = weight(_text_:retrieval in 2556) [ClassicSimilarity], result of:
          0.074616335 = score(doc=2556,freq=18.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.8021017 = fieldWeight in 2556, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
        0.053474084 = product of:
          0.10694817 = sum of:
            0.10694817 = weight(_text_:evaluation in 2556) [ClassicSimilarity], result of:
              0.10694817 = score(doc=2556,freq=10.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.82905054 = fieldWeight in 2556, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    LCSH
    Information storage and retrieval systems / Evaluation
    Interactive computer systems / Evaluation
    RSWK
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Subject
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Information storage and retrieval systems / Evaluation
    Interactive computer systems / Evaluation
  9. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.04
    0.041307535 = product of:
      0.13769178 = sum of:
        0.011846555 = weight(_text_:information in 744) [ClassicSimilarity], result of:
          0.011846555 = score(doc=744,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.21943474 = fieldWeight in 744, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=744)
        0.024872115 = weight(_text_:retrieval in 744) [ClassicSimilarity], result of:
          0.024872115 = score(doc=744,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.26736724 = fieldWeight in 744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=744)
        0.100973114 = sum of:
          0.06763996 = weight(_text_:evaluation in 744) [ClassicSimilarity], result of:
            0.06763996 = score(doc=744,freq=4.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.5243376 = fieldWeight in 744, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
          0.033333153 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
            0.033333153 = score(doc=744,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.30952093 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
      0.3 = coord(3/10)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large-scale full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from research and industry. The work of the conference is aimed at designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
    Footnote
    Translated title: TREC: the Text REtrieval Conference
  10. Buckley, C.; Voorhees, E.M.: Retrieval system evaluation (2005) 0.04
    0.040619902 = product of:
      0.13539967 = sum of:
        0.014659365 = weight(_text_:information in 648) [ClassicSimilarity], result of:
          0.014659365 = score(doc=648,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.27153665 = fieldWeight in 648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=648)
        0.06155534 = weight(_text_:retrieval in 648) [ClassicSimilarity], result of:
          0.06155534 = score(doc=648,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.6617001 = fieldWeight in 648, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=648)
        0.059184965 = product of:
          0.11836993 = sum of:
            0.11836993 = weight(_text_:evaluation in 648) [ClassicSimilarity], result of:
              0.11836993 = score(doc=648,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.9175908 = fieldWeight in 648, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.109375 = fieldNorm(doc=648)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  11. Rokaya, M.; Atlam, E.; Fuketa, M.; Dorji, T.C.; Aoe, J.-i.: Ranking of field association terms using Co-word analysis (2008) 0.04
    0.040359095 = product of:
      0.1345303 = sum of:
        0.0125651695 = weight(_text_:information in 2060) [ClassicSimilarity], result of:
          0.0125651695 = score(doc=2060,freq=8.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23274569 = fieldWeight in 2060, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2060)
        0.018654086 = weight(_text_:retrieval in 2060) [ClassicSimilarity], result of:
          0.018654086 = score(doc=2060,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.20052543 = fieldWeight in 2060, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2060)
        0.103311054 = weight(_text_:ranking in 2060) [ClassicSimilarity], result of:
          0.103311054 = score(doc=2060,freq=6.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.62106377 = fieldWeight in 2060, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2060)
      0.3 = coord(3/10)
    
    Abstract
    Information retrieval involves finding desired information in a store of information or a database. In this paper, co-word analysis is used to rank a selected sample of FA terms; based on this ranking, a better arrangement of search results can be achieved. Experimental results were obtained using 41 MB of data (7,660 documents) in the field of sports, collected from CNN newspaper sports coverage and distributed over 11 sub-fields of sports. In the experiments, average precision increased by 18.3% when the proposed arrangement scheme used absolute frequency to compute term weights, and by 17.2% when it used a formula based on "TF*IDF" to compute term weights.
    Source
    Information processing and management. 44(2008) no.2, S.738-755
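Co-word analysis of the kind the abstract describes counts how often terms co-occur in documents and ranks candidate terms by an association measure, such as the equivalence index e(i,j) = c_ij² / (c_i · c_j). A toy sketch; the corpus, terms, and the specific measure are our assumptions, since the paper's exact scheme is not reproduced here:

```python
from collections import Counter
from itertools import combinations

docs = [  # toy corpus; the paper used 7,660 sports documents
    {"goal", "match", "referee"},
    {"goal", "match", "season"},
    {"match", "season", "transfer"},
]

occ = Counter(t for d in docs for t in d)                       # term frequencies
co = Counter(frozenset(p) for d in docs                          # pair co-occurrence
             for p in combinations(sorted(d), 2))

def equivalence(t1, t2):
    """Equivalence index: squared co-occurrence over the product of occurrences."""
    c = co[frozenset((t1, t2))]
    return c * c / (occ[t1] * occ[t2])

# Rank candidate terms by their association with "match"
ranking = sorted((t for t in occ if t != "match"),
                 key=lambda t: equivalence(t, "match"), reverse=True)
print(ranking)
```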
  12. Smith, M.P.; Pollitt, A.S.: Ranking and relevance feedback extensions to a view-based searching system (1995) 0.04
    0.03963857 = product of:
      0.13212857 = sum of:
        0.010881756 = weight(_text_:information in 3855) [ClassicSimilarity], result of:
          0.010881756 = score(doc=3855,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.20156369 = fieldWeight in 3855, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3855)
        0.103311054 = weight(_text_:ranking in 3855) [ClassicSimilarity], result of:
          0.103311054 = score(doc=3855,freq=6.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.62106377 = fieldWeight in 3855, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=3855)
        0.017935753 = product of:
          0.035871506 = sum of:
            0.035871506 = weight(_text_:evaluation in 3855) [ClassicSimilarity], result of:
              0.035871506 = score(doc=3855,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.278072 = fieldWeight in 3855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3855)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    The University of Huddersfield, UK, is researching ways of incorporating ranking and relevance feedback techniques into a thesaurus-based searching system. The INSPEC database on STN International was searched using the VUSE (View-based Search Engine) interface. Thesaurus terms from documents judged relevant by users were used to query INSPEC and create a ranking of documents based on probabilistic methods. An evaluation was carried out to establish whether it would be better for the user to continue searching with the thesaurus-based front end or to use relevance feedback and the ranked list of documents it would produce. The study also examines the effort users had to expend to reach relevant documents, measured as the number of non-relevant documents seen between relevant documents.
    Imprint
    Oxford : Learned Information
    Source
    Online information 95: Proceedings of the 19th International online information meeting, London, 5-7 December 1995. Ed.: D.I. Raitt u. B. Jeapes
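The "probabilistic methods" in the abstract above are not spelled out here; the classic Robertson/Sparck Jones relevance weight is one plausible reading. A hedged sketch, with illustrative term statistics of our own invention:

```python
import math

def rsj_weight(r, n, R, N):
    """Robertson/Sparck Jones relevance weight with the usual 0.5 correction:
    r = relevant docs containing the term, n = docs containing the term,
    R = relevant docs judged so far, N = collection size."""
    return math.log(((r + 0.5) / (R - r + 0.5)) /
                    ((n - r + 0.5) / (N - n - R + r + 0.5)))

def score(doc_terms, term_stats, R, N):
    """Rank a document by summing the weights of thesaurus terms it carries."""
    return sum(rsj_weight(r, n, R, N)
               for t, (r, n) in term_stats.items() if t in doc_terms)

stats = {"ranking": (3, 10), "system": (1, 50)}  # hypothetical (r, n) per term
print(score({"ranking", "system"}, stats, R=4, N=1000))
```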
  13. Evaluation of information retrieval systems : special topic issue (1996) 0.04
    0.037576564 = product of:
      0.12525521 = sum of:
        0.021763513 = weight(_text_:information in 6812) [ClassicSimilarity], result of:
          0.021763513 = score(doc=6812,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.40312737 = fieldWeight in 6812, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=6812)
        0.052761722 = weight(_text_:retrieval in 6812) [ClassicSimilarity], result of:
          0.052761722 = score(doc=6812,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5671716 = fieldWeight in 6812, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=6812)
        0.05072997 = product of:
          0.10145994 = sum of:
            0.10145994 = weight(_text_:evaluation in 6812) [ClassicSimilarity], result of:
              0.10145994 = score(doc=6812,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.7865064 = fieldWeight in 6812, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6812)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    A special issue devoted to the topic of evaluation of information retrieval systems
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.1-105
  14. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.04
    0.03736157 = product of:
      0.12453856 = sum of:
        0.013851797 = weight(_text_:information in 2026) [ClassicSimilarity], result of:
          0.013851797 = score(doc=2026,freq=14.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.256578 = fieldWeight in 2026, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.038077492 = weight(_text_:retrieval in 2026) [ClassicSimilarity], result of:
          0.038077492 = score(doc=2026,freq=12.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.40932083 = fieldWeight in 2026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.072609276 = sum of:
          0.05177606 = weight(_text_:evaluation in 2026) [ClassicSimilarity], result of:
            0.05177606 = score(doc=2026,freq=6.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.40136236 = fieldWeight in 2026, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.02083322 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.02083322 = score(doc=2026,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.3 = coord(3/10)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
    Beitrag eines Themenbereichs: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  15. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.04
    0.03728212 = product of:
      0.124273725 = sum of:
        0.014048288 = weight(_text_:information in 6967) [ClassicSimilarity], result of:
          0.014048288 = score(doc=6967,freq=10.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.2602176 = fieldWeight in 6967, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.04935407 = weight(_text_:retrieval in 6967) [ClassicSimilarity], result of:
          0.04935407 = score(doc=6967,freq=14.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5305404 = fieldWeight in 6967, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.06087137 = sum of:
          0.035871506 = weight(_text_:evaluation in 6967) [ClassicSimilarity], result of:
            0.035871506 = score(doc=6967,freq=2.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.278072 = fieldWeight in 6967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
          0.024999864 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
            0.024999864 = score(doc=6967,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.23214069 = fieldWeight in 6967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
      0.3 = coord(3/10)
    
    Abstract
    Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information-theoretic measure. Reports results of an evaluation exercise comparing the performance of imaging retrieval, using possible-world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially the performance of imaging retrieval was seen to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation needed at run time; a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
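Set-based precision and recall, as measured over the Cranfield 2 collection in the abstract above, can be computed directly (document identifiers are illustrative):

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall as used in Cranfield-style evaluation."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
print(p, r)  # 0.5 and 2/3
```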
  16. Huang, M.-H.: ¬The evaluation of information retrieval systems (1997) 0.04
    0.03723196 = product of:
      0.12410653 = sum of:
        0.010470974 = weight(_text_:information in 1827) [ClassicSimilarity], result of:
          0.010470974 = score(doc=1827,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.19395474 = fieldWeight in 1827, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1827)
        0.05384971 = weight(_text_:retrieval in 1827) [ClassicSimilarity], result of:
          0.05384971 = score(doc=1827,freq=6.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5788671 = fieldWeight in 1827, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1827)
        0.05978584 = product of:
          0.11957168 = sum of:
            0.11957168 = weight(_text_:evaluation in 1827) [ClassicSimilarity], result of:
              0.11957168 = score(doc=1827,freq=8.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.9269066 = fieldWeight in 1827, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1827)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    Describes the current status of retrieval system evaluation and predicts its future development. Discusses various performance measures and 'utility' concepts from a historical perspective. Also addresses the current status of search evaluation and discusses the empirical findings of retrieval system evaluation
  17. Robertson, S.E.; Thompson, C.L.: ¬An operational evaluation of weighting, ranking and relevance feedback via a front-end system (1987) 0.04
    0.036205128 = product of:
      0.18102564 = sum of:
        0.13917555 = weight(_text_:ranking in 3858) [ClassicSimilarity], result of:
          0.13917555 = score(doc=3858,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.8366664 = fieldWeight in 3858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.109375 = fieldNorm(doc=3858)
        0.04185009 = product of:
          0.08370018 = sum of:
            0.08370018 = weight(_text_:evaluation in 3858) [ClassicSimilarity], result of:
              0.08370018 = score(doc=3858,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.64883465 = fieldWeight in 3858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3858)
          0.5 = coord(1/2)
      0.2 = coord(2/10)
    
  18. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.04
    0.03529449 = product of:
      0.1176483 = sum of:
        0.010365736 = weight(_text_:information in 399) [ClassicSimilarity], result of:
          0.010365736 = score(doc=399,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.1920054 = fieldWeight in 399, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.037694797 = weight(_text_:retrieval in 399) [ClassicSimilarity], result of:
          0.037694797 = score(doc=399,freq=6.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.40520695 = fieldWeight in 399, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.069587775 = weight(_text_:ranking in 399) [ClassicSimilarity], result of:
          0.069587775 = score(doc=399,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.4183332 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.3 = coord(3/10)
    
    Abstract
    Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results
    Source
    Journal of information, communication and library science. 3(1997) no.3, S.3-10
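The powers-of-2 weighting mentioned in the abstract gives every distinct combination of matched query terms a unique total score, so a ranked output groups documents by exactly which terms they match. A minimal sketch; the function name and data are illustrative:

```python
def rank_by_power_weights(query_terms, docs):
    """Give the i-th query term (most important first) weight 2**(k-1-i).
    Each distinct subset of matched terms then sums to a unique score,
    so sorting by score orders documents by exactly which terms match."""
    k = len(query_terms)
    weights = {t: 2 ** (k - 1 - i) for i, t in enumerate(query_terms)}
    scored = [(sum(w for t, w in weights.items() if t in doc), doc)
              for doc in docs]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

docs = [{"recall", "precision"}, {"recall"}, {"precision", "ranking"}]
print(rank_by_power_weights(["recall", "precision", "ranking"], docs))
# scores 6, 4, 3: the doc matching the two most important terms comes first
```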
  19. Cooper, W.S.: ¬The paradoxical role of unexamined documents in the evaluation of retrieval effectiveness (1976) 0.03
    0.03429794 = product of:
      0.11432646 = sum of:
        0.01675356 = weight(_text_:information in 2186) [ClassicSimilarity], result of:
          0.01675356 = score(doc=2186,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.3103276 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=2186)
        0.04974423 = weight(_text_:retrieval in 2186) [ClassicSimilarity], result of:
          0.04974423 = score(doc=2186,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5347345 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=2186)
        0.04782867 = product of:
          0.09565734 = sum of:
            0.09565734 = weight(_text_:evaluation in 2186) [ClassicSimilarity], result of:
              0.09565734 = score(doc=2186,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.7415253 = fieldWeight in 2186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.125 = fieldNorm(doc=2186)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Source
    Information processing and management. 12(1976), S.367-375
  20. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.03
    0.033648 = product of:
      0.11216 = sum of:
        0.010365736 = weight(_text_:information in 3002) [ClassicSimilarity], result of:
          0.010365736 = score(doc=3002,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.1920054 = fieldWeight in 3002, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3002)
        0.03077767 = weight(_text_:retrieval in 3002) [ClassicSimilarity], result of:
          0.03077767 = score(doc=3002,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.33085006 = fieldWeight in 3002, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3002)
        0.071016595 = sum of:
          0.04185009 = weight(_text_:evaluation in 3002) [ClassicSimilarity], result of:
            0.04185009 = score(doc=3002,freq=2.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.32441732 = fieldWeight in 3002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3002)
          0.029166508 = weight(_text_:22 in 3002) [ClassicSimilarity], result of:
            0.029166508 = score(doc=3002,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.2708308 = fieldWeight in 3002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3002)
      0.3 = coord(3/10)
    
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS system, reported in 'Communications of the ACM' 10 years ago, continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimation. It is also suggested that this method of recall estimation may serve as the basis for recall estimates that are truly comparable between systems
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
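Recall estimation of the kind the STAIRS abstract describes typically judges a random sample of the unretrieved documents and scales the relevant fraction up to the whole unretrieved set. A simplified sketch with made-up numbers; the study's actual sampling design was more elaborate:

```python
def estimate_recall(relevant_retrieved, sample_size, sample_relevant,
                    unretrieved_size):
    """Estimate recall when the unretrieved set is too large to judge fully:
    scale the relevant fraction found in a random sample of unretrieved
    documents up to the whole unretrieved set, then combine with the
    relevant documents actually retrieved."""
    est_relevant_unretrieved = (sample_relevant / sample_size) * unretrieved_size
    return relevant_retrieved / (relevant_retrieved + est_relevant_unretrieved)

r = estimate_recall(relevant_retrieved=40, sample_size=500,
                    sample_relevant=5, unretrieved_size=10000)
print(round(r, 3))  # 0.286: most relevant documents were never retrieved
```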
