Search (89 results, page 1 of 5)

  • theme_ss:"Retrievalstudien"
  1. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.05
    0.045764312 = product of:
      0.06864647 = sum of:
        0.038346052 = product of:
          0.076692104 = sum of:
            0.076692104 = weight(_text_:t in 2417) [ClassicSimilarity], result of:
              0.076692104 = score(doc=2417,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.43524727 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
        0.030300418 = product of:
          0.060600836 = sum of:
            0.060600836 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.060600836 = score(doc=2417,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Pages
    S.22-25
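    The relevance figure beside each hit is derived from Lucene's ClassicSimilarity (TF-IDF) explanation printed above it. As a minimal sketch (not part of the database output), the following Python snippet reproduces the arithmetic of the first result's breakdown for the term _text_:t in document 2417; all constants are taken from that explain tree, nothing else is assumed.

    import math

    # Figures taken from the explain tree of result 1 (term "_text_:t", doc 2417)
    idf        = 3.9394085     # idf(docFreq=2338, maxDocs=44218)
    query_norm = 0.04472842    # queryNorm
    tf         = math.sqrt(2)  # tf(freq=2.0) = sqrt(termFreq) = 1.4142135
    field_norm = 0.078125      # fieldNorm(doc=2417)

    query_weight = idf * query_norm             # 0.17620352
    field_weight = tf * idf * field_norm        # 0.43524727
    term_score   = query_weight * field_weight  # 0.076692104

    # One of two clauses in this sub-query matched -> coord(1/2)
    clause_t  = term_score * 0.5                # 0.038346052
    clause_22 = 0.060600836 * 0.5               # same formula applied to "_text_:22"

    # Two of three top-level clauses matched -> coord(2/3)
    total = (clause_t + clause_22) * (2 / 3)
    print(round(total, 9))                      # ~0.045764312, displayed rounded as 0.05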
  2. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.04
    0.03630027 = product of:
      0.10890081 = sum of:
        0.10890081 = sum of:
          0.07860039 = weight(_text_:i in 1184) [ClassicSimilarity], result of:
            0.07860039 = score(doc=1184,freq=10.0), product of:
              0.16870351 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04472842 = queryNorm
              0.46590847 = fieldWeight in 1184, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.030300418 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.030300418 = score(doc=1184,freq=2.0), product of:
              0.1566313 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04472842 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.33333334 = coord(1/3)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects, including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and the development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  3. Parapar, J.; Losada, D.E.; Presedo-Quindimil, M.A.; Barreiro, A.: Using score distributions to compare statistical significance tests for information retrieval evaluation (2020) 0.03
    0.029352438 = product of:
      0.044028655 = sum of:
        0.019173026 = product of:
          0.038346052 = sum of:
            0.038346052 = weight(_text_:t in 5506) [ClassicSimilarity], result of:
              0.038346052 = score(doc=5506,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.21762364 = fieldWeight in 5506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5506)
          0.5 = coord(1/2)
        0.024855627 = product of:
          0.049711253 = sum of:
            0.049711253 = weight(_text_:i in 5506) [ClassicSimilarity], result of:
              0.049711253 = score(doc=5506,freq=4.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.29466638 = fieldWeight in 5506, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5506)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Statistical significance tests can provide evidence that the observed difference in performance between 2 methods is not due to chance. In information retrieval (IR), some studies have examined the validity and suitability of such tests for comparing search systems. We argue here that current methods for assessing the reliability of statistical tests suffer from some methodological weaknesses, and we propose a novel way to study significance tests for retrieval evaluation. Using Score Distributions, we model the output of multiple search systems, produce simulated search results from such models, and compare them using various significance tests. A key strength of this approach is that we assess statistical tests under perfect knowledge about the truth or falseness of the null hypothesis. This new method for studying the power of significance tests in IR evaluation is formal and innovative. Following this type of analysis, we found that both the sign test and Wilcoxon signed test have more power than the permutation test and the t-test. The sign test and Wilcoxon signed test also have good behavior in terms of type I errors. The bootstrap test shows few type I errors, but it has less power than the other methods tested.
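    The core idea in this abstract, assessing significance tests under perfect knowledge that the null hypothesis is true, can be illustrated in a few lines. The snippet below is a minimal sketch, not the authors' code: it draws per-topic scores for two fictitious systems from the same distribution, so any rejection is a type I error, and counts how often the paired t-test, the Wilcoxon signed test, and the sign test reject at alpha = 0.05. The Beta(2, 5) score model and the topic/trial counts are illustrative assumptions; numpy and scipy >= 1.7 are required.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n_topics, n_trials = 0.05, 50, 2000
    rejections = {"paired t-test": 0, "wilcoxon": 0, "sign": 0}

    for _ in range(n_trials):
        a = rng.beta(2, 5, n_topics)  # per-topic scores of system A
        b = rng.beta(2, 5, n_topics)  # system B drawn from the same model -> null is true
        d = a - b
        rejections["paired t-test"] += stats.ttest_rel(a, b).pvalue < alpha
        rejections["wilcoxon"]      += stats.wilcoxon(a, b).pvalue < alpha
        k, n = int((d > 0).sum()), int((d != 0).sum())
        rejections["sign"]          += stats.binomtest(k, n, 0.5).pvalue < alpha

    for name, count in rejections.items():
        print(f"{name}: empirical type I error rate = {count / n_trials:.3f}")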
  4. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.03
    0.026180632 = product of:
      0.0785419 = sum of:
        0.0785419 = sum of:
          0.0421814 = weight(_text_:i in 1757) [ClassicSimilarity], result of:
            0.0421814 = score(doc=1757,freq=2.0), product of:
              0.16870351 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04472842 = queryNorm
              0.25003272 = fieldWeight in 1757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.046875 = fieldNorm(doc=1757)
          0.0363605 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
            0.0363605 = score(doc=1757,freq=2.0), product of:
              0.1566313 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04472842 = queryNorm
              0.23214069 = fieldWeight in 1757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1757)
      0.33333334 = coord(1/3)
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study where 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions.
  5. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.02
    0.022882156 = product of:
      0.034323234 = sum of:
        0.019173026 = product of:
          0.038346052 = sum of:
            0.038346052 = weight(_text_:t in 2587) [ClassicSimilarity], result of:
              0.038346052 = score(doc=2587,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.21762364 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
        0.015150209 = product of:
          0.030300418 = sum of:
            0.030300418 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
              0.030300418 = score(doc=2587,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.19345059 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to propose a method for obtaining more accurate results when comparing the performance of paired information retrieval (IR) systems, with reference to the current method, which is based on the mean effectiveness scores of the systems across a set of identified topics/queries. Design/methodology/approach In the proposed approach, instead of the classic method of using a set of topic scores, document-level scores are used as the evaluation unit. These document scores are the documents' defined weights, which play the role of the systems' mean average precision (MAP) score as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection. Findings The p-values generated through the two types of significance tests, namely Student's t-test and the Mann-Whitney test, show that by using document-level scores as the evaluation unit, the difference between IR systems is more significant compared with utilizing topic scores. Originality/value Utilizing a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
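    As a rough sketch of the contrast this abstract describes (with invented score arrays, not the paper's document weights), the snippet below compares the usual paired comparison over per-topic MAP values with a comparison over pooled per-document scores, using the two tests named in the abstract, Student's t-test and the Mann-Whitney test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Topic level: paired t-test over per-topic MAP values of two systems (illustrative data)
    map_a = rng.beta(2, 5, 50)
    map_b = np.clip(map_a + rng.normal(0.02, 0.05, 50), 0.0, 1.0)
    print("topic level, paired t-test p =", stats.ttest_rel(map_a, map_b).pvalue)

    # Document level: many more observations, compared with the Mann-Whitney test
    doc_a = rng.beta(2.0, 5, 5000)
    doc_b = rng.beta(2.1, 5, 5000)
    print("document level, Mann-Whitney p =", stats.mannwhitneyu(doc_a, doc_b).pvalue)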
  6. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.021817192 = product of:
      0.06545158 = sum of:
        0.06545158 = sum of:
          0.03515116 = weight(_text_:i in 2339) [ClassicSimilarity], result of:
            0.03515116 = score(doc=2339,freq=2.0), product of:
              0.16870351 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04472842 = queryNorm
              0.20836058 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
          0.030300418 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
            0.030300418 = score(doc=2339,freq=2.0), product of:
              0.1566313 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04472842 = queryNorm
              0.19345059 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
      0.33333334 = coord(1/3)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  7. Kutlu, M.; Elsayed, T.; Lease, M.: Intelligent topic selection for low-cost information retrieval evaluation : a new perspective on deep vs. shallow judging (2018) 0.02
    0.019599257 = product of:
      0.029398885 = sum of:
        0.01533842 = product of:
          0.03067684 = sum of:
            0.03067684 = weight(_text_:t in 5092) [ClassicSimilarity], result of:
              0.03067684 = score(doc=5092,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.17409891 = fieldWeight in 5092, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5092)
          0.5 = coord(1/2)
        0.014060466 = product of:
          0.028120931 = sum of:
            0.028120931 = weight(_text_:i in 5092) [ClassicSimilarity], result of:
              0.028120931 = score(doc=5092,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.16668847 = fieldWeight in 5092, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5092)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections (e.g., ClueWeb12's 700M+ Webpages). This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or whether one should simply perform shallow judging over many topics. In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments. Experiments on NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
  8. Strzalkowski, T.; Guthrie, L.; Karlgren, J.; Leistensnider, J.; Lin, F.; Perez-Carballo, J.; Straszheim, T.; Wang, J.; Wilding, J.: Natural language information retrieval : TREC-5 report (1997) 0.02
    0.018076502 = product of:
      0.0542295 = sum of:
        0.0542295 = product of:
          0.108459 = sum of:
            0.108459 = weight(_text_:t in 3100) [ClassicSimilarity], result of:
              0.108459 = score(doc=3100,freq=4.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.6155326 = fieldWeight in 3100, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3100)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  9. Aitchison, T.M.: Comparative evaluation of index languages : Part I, Design. Part II, Results (1969) 0.02
    0.016403876 = product of:
      0.04921163 = sum of:
        0.04921163 = product of:
          0.09842326 = sum of:
            0.09842326 = weight(_text_:i in 561) [ClassicSimilarity], result of:
              0.09842326 = score(doc=561,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.58340967 = fieldWeight in 561, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.109375 = fieldNorm(doc=561)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Järvelin, K.: Evaluation (2011) 0.02
    0.016403876 = product of:
      0.04921163 = sum of:
        0.04921163 = product of:
          0.09842326 = sum of:
            0.09842326 = weight(_text_:i in 548) [ClassicSimilarity], result of:
              0.09842326 = score(doc=548,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.58340967 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.109375 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Interactive information seeking, behaviour and retrieval. Eds.: Ruthven, I. and D. Kelly
  11. Kaltenborn, K.-F.: Endnutzerrecherchen in der CD-ROM-Datenbank Medline : T.1: Evaluations- und Benutzerforschung über Nutzungscharakteristika, Bewertung der Rechercheergebnisse und künftige Informationsgewinnung; T.2: Evaluations- und Benutzerforschung über Recherchequalität und Nutzer-Computer/Datenbank-Interaktion (1991) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 5105) [ClassicSimilarity], result of:
              0.09203052 = score(doc=5105,freq=8.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 5105, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5105)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Nachrichten für Dokumentation. 42(1991) H.2, S.107-114. (T.1); 42(1991) H.3, S.177-190 (T.2)
  12. Davis, M.; Dunning, T.: A TREC evaluation of query translation methods for multi-lingual text retrieval (1996) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 1917) [ClassicSimilarity], result of:
              0.09203052 = score(doc=1917,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 1917, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1917)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  13. Strzalkowski, T.; Perez-Carballo, J.: Natural language information retrieval : TREC-4 report (1996) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 3211) [ClassicSimilarity], result of:
              0.09203052 = score(doc=3211,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 3211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  14. Strzalkowski, T.; Sparck Jones, K.: NLP track at TREC-5 (1997) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 3098) [ClassicSimilarity], result of:
              0.09203052 = score(doc=3098,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 3098, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3098)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  15. Kuriyama, K.; Kando, N.; Nozue, T.; Eguchi, K.: Pooling for a large-scale test collection : an analysis of the search results from the First NTCIR Workshop (2002) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 3830) [ClassicSimilarity], result of:
              0.09203052 = score(doc=3830,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 3830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3830)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  16. Agata, T.: A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 3892) [ClassicSimilarity], result of:
              0.09203052 = score(doc=3892,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 3892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3892)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  17. Mandl, T.: Neue Entwicklungen bei den Evaluierungsinitiativen im Information Retrieval (2006) 0.01
    0.014461201 = product of:
      0.043383602 = sum of:
        0.043383602 = product of:
          0.086767204 = sum of:
            0.086767204 = weight(_text_:t in 5975) [ClassicSimilarity], result of:
              0.086767204 = score(doc=5975,freq=4.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.49242607 = fieldWeight in 5975, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5975)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  18. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.014140195 = product of:
      0.042420585 = sum of:
        0.042420585 = product of:
          0.08484117 = sum of:
            0.08484117 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.08484117 = score(doc=262,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  19. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.014140195 = product of:
      0.042420585 = sum of:
        0.042420585 = product of:
          0.08484117 = sum of:
            0.08484117 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.08484117 = score(doc=6418,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.6, S.57-58
  20. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.014140195 = product of:
      0.042420585 = sum of:
        0.042420585 = product of:
          0.08484117 = sum of:
            0.08484117 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.08484117 = score(doc=6438,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19

Languages

  • e 73
  • d 12
  • f 1
  • ja 1
  • m 1

Types

  • a 81
  • s 5
  • m 4
  • p 1
  • r 1