Search (94 results, page 1 of 5)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • year_i:[2000 TO 2010}
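These facets are Solr filter queries: the _ss suffix denotes a multi-valued string field, _i an integer field, and [2000 TO 2010} is Solr's half-open range syntax (2000 inclusive, 2010 exclusive). A minimal sketch of reproducing this page against a Solr endpoint; the host, core name, and the 20-hits-per-page figure are assumptions (94 results over 5 pages), not taken from the page itself:

```python
import requests

# Hypothetical endpoint; the actual Solr host/core behind this page is not shown.
SOLR_URL = "http://localhost:8983/solr/literature/select"

params = {
    "q": "*:*",
    # The three active facet filters above, applied as filter queries.
    "fq": [
        'language_ss:"e"',
        'theme_ss:"Retrievalstudien"',
        "year_i:[2000 TO 2010}",   # half-open: includes 2000, excludes 2010
    ],
    "rows": 20,            # assumed page size (94 results over 5 pages)
    "start": 0,            # page 1
    "debugQuery": "true",  # requests the score-explanation data rendered below
}

# requests encodes the "fq" list as repeated fq= parameters, as Solr expects.
print(requests.get(SOLR_URL, params=params).json()["response"]["numFound"])  # expected: 94
```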
  1. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.01
    0.009005977 = product of:
      0.027017929 = sum of:
        0.009274333 = weight(_text_:in in 2552) [ClassicSimilarity], result of:
          0.009274333 = score(doc=2552,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 2552, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2552)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
              0.035487194 = score(doc=2552,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
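The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output, and its arithmetic can be replayed directly: fieldWeight = tf x idf x fieldNorm, queryWeight = idf x queryNorm, each clause score is the product of the two, and coord() scales the summed clause scores by the fraction of query clauses that matched. A small sketch verifying the numbers for the _text_:in clause of this first hit:

```python
import math

# Values copied from the explanation tree for doc 2552, clause _text_:in.
freq = 6.0
doc_freq, max_docs = 30841, 44218
query_norm = 0.043654136
field_norm = 0.046875

tf = math.sqrt(freq)                           # 2.4494898
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.3602545
query_weight = idf * query_norm                # 0.059380736 (idf enters the score twice)
field_weight = tf * idf * field_norm           # 0.1561842
print(query_weight * field_weight)             # 0.009274333, as in the tree

# The hit's overall score is the clause sum scaled by coord(2/6):
print((0.009274333 + 0.017743597) * (2 / 6))   # 0.009005977
```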
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rolling (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rolling's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rolling's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
    Date
    9. 2.1997 18:44:22
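For reference, the two consistency measures named in the abstract above are usually read as set-overlap coefficients: Hooper's (1965) measure is the Jaccard-style ratio of agreed terms to all distinct terms either indexer used, while Rolling's (1981) measure is the Dice-style variant that counts agreement twice and so can never be lower; this matches the abstract, where Rolling's figures consistently exceed Hooper's. A minimal sketch under that reading; the example term sets are invented:

```python
def hooper(a: set[str], b: set[str]) -> float:
    """Hooper (1965): agreed terms / all distinct terms used by either indexer."""
    return len(a & b) / len(a | b)

def rolling(a: set[str], b: set[str]) -> float:
    """Rolling (1981): 2 * agreed terms / (indexer A's terms + indexer B's terms)."""
    return 2 * len(a & b) / (len(a) + len(b))

# Invented example: two indexers assigning descriptors to the same record.
indexer_a = {"information retrieval", "indexing", "databases", "evaluation"}
indexer_b = {"information retrieval", "indexing", "consistency"}

print(f"Hooper:  {hooper(indexer_a, indexer_b):.2%}")   # 40.00% (2 agreed of 5 distinct)
print(f"Rolling: {rolling(indexer_a, indexer_b):.2%}")  # 57.14% (2*2 of 4+3 terms)
```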
  2. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    0.008863994 = product of:
      0.02659198 = sum of:
        0.011805649 = weight(_text_:in in 1184) [ClassicSimilarity], result of:
          0.011805649 = score(doc=1184,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19881277 = fieldWeight in 1184, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.029572664 = score(doc=1184,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
    Source
    Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler
  3. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.01
    0.007032239 = product of:
      0.021096716 = sum of:
        0.006310384 = weight(_text_:in in 2026) [ClassicSimilarity], result of:
          0.006310384 = score(doc=2026,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 2026, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
              0.029572664 = score(doc=2026,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 2026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper discusses the role of user-centred evaluation as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluation as a flexible, adaptable, and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  4. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.006900288 = product of:
      0.04140173 = sum of:
        0.04140173 = product of:
          0.08280346 = sum of:
            0.08280346 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.08280346 = score(doc=6438,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    11. 8.2001 16:22:19
  5. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.01
    0.0060039847 = product of:
      0.018011954 = sum of:
        0.0061828885 = weight(_text_:in in 2752) [ClassicSimilarity], result of:
          0.0061828885 = score(doc=2752,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1041228 = fieldWeight in 2752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2752)
        0.011829065 = product of:
          0.02365813 = sum of:
            0.02365813 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.02365813 = score(doc=2752,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all quite conceptually/algorithmically dissimilar and equally and well performing, in that order of importance.
    Date
    22. 3.2009 18:48:28
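Borda and CombSUM, named in the abstract above, are standard rank-fusion methods: CombSUM adds the (typically normalized) scores each fused run assigns to a document, while Borda counting converts each run's ranking into points, with the top rank earning the most, and sums the points. A minimal sketch of the two ideas, not the authors' actual implementation:

```python
from collections import defaultdict

def comb_sum(runs: list[dict[str, float]]) -> dict[str, float]:
    """CombSUM: sum each document's scores across all fused runs."""
    fused = defaultdict(float)
    for run in runs:
        for doc, score in run.items():
            fused[doc] += score
    return dict(fused)

def borda(rankings: list[list[str]]) -> dict[str, float]:
    """Borda: each run awards n-1, n-2, ..., 0 points from best to worst rank."""
    fused = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for rank, doc in enumerate(ranking):
            fused[doc] += n - 1 - rank
    return dict(fused)

# Invented toy runs from two retrieval models over three documents.
run_a = {"d1": 0.9, "d2": 0.4, "d3": 0.1}
run_b = {"d2": 0.8, "d1": 0.5, "d3": 0.3}

print(comb_sum([run_a, run_b]))                          # {'d1': 1.4, 'd2': 1.2, 'd3': 0.4}
print(borda([["d1", "d2", "d3"], ["d2", "d1", "d3"]]))   # {'d1': 3.0, 'd2': 3.0, 'd3': 0.0}
```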
  6. Hood, W.W.; Wilson, C.S.: The scatter of documents over databases in different subject domains : how many databases are needed? (2001) 0.00
    0.0025762038 = product of:
      0.015457222 = sum of:
        0.015457222 = weight(_text_:in in 6936) [ClassicSimilarity], result of:
          0.015457222 = score(doc=6936,freq=24.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.260307 = fieldWeight in 6936, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6936)
      0.16666667 = coord(1/6)
    
    Abstract
    The distribution of bibliographic records in on-line bibliographic databases is examined using 14 different search topics. These topics were searched using the DIALOG database host, and using as many suitable databases as possible. The presence of duplicate records in the searches was taken into consideration in the analysis, and the problem with lexical ambiguity in at least one search topic is discussed. The study answers questions such as how many databases are needed in a multifile search for particular topics, and what coverage will be achieved using a certain number of databases. The distribution of the percentages of records retrieved over a number of databases for 13 of the 14 search topics roughly fell into three groups: (1) high concentration of records in one database with about 80% coverage in five to eight databases; (2) moderate concentration in one database with about 80% coverage in seven to 10 databases; and (3) low concentration in one database with about 80% coverage in 16 to 19 databases. The study does conform with earlier results, but shows that the number of databases needed for searches with varying complexities of search strategies is much more topic-dependent than previous studies would indicate.
  7. Voorhees, E.M.: Question answering in TREC (2005) 0.00
    0.0025241538 = product of:
      0.015144923 = sum of:
        0.015144923 = weight(_text_:in in 6487) [ClassicSimilarity], result of:
          0.015144923 = score(doc=6487,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.25504774 = fieldWeight in 6487, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=6487)
      0.16666667 = coord(1/6)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  8. Robertson, S.: On the history of evaluation in IR (2009) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 3653) [ClassicSimilarity], result of:
          0.014278769 = score(doc=3653,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 3653, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3653)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper is a personal take on the history of evaluation experiments in information retrieval. It describes some of the early experiments that were formative in our understanding, and goes on to discuss the current dominance of TREC (the Text REtrieval Conference) and to assess its impact.
    Source
    Information science in transition. Ed.: A. Gilchrist
  9. Savoy, J.: Cross-language information retrieval : experiments based on CLEF 2000 corpora (2003) 0.00
    0.0023611297 = product of:
      0.014166778 = sum of:
        0.014166778 = weight(_text_:in in 1034) [ClassicSimilarity], result of:
          0.014166778 = score(doc=1034,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.23857531 = fieldWeight in 1034, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1034)
      0.16666667 = coord(1/6)
    
    Abstract
    Search engines play an essential role in the usability of Internet-based information systems and without them the Web would be much less accessible, and at the very least would develop at a much slower rate. Given that non-English users now tend to make up the majority in this environment, our main objective is to analyze and evaluate the retrieval effectiveness of various indexing and search strategies based on test-collections written in four different languages: English, French, German, and Italian. Our second objective is to describe and evaluate various approaches that might be implemented in order to effectively access document collections written in another language. As a third objective, we will explore the underlying problems involved in searching document collections written in the four different languages, and we will suggest and evaluate different database merging strategies capable of providing the user with a single unique result list.
  10. Eastman, C.M.: 30,000 hits may be better than 300 : precision anomalies in Internet searches (2002) 0.00
    0.0023517415 = product of:
      0.014110449 = sum of:
        0.014110449 = weight(_text_:in in 5231) [ClassicSimilarity], result of:
          0.014110449 = score(doc=5231,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2376267 = fieldWeight in 5231, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5231)
      0.16666667 = coord(1/6)
    
    Abstract
    In this issue we begin with a paper where Eastman points out that conventional narrower queries (the use of conjunctions and phrases) in a web engine search will reduce the number of hits returned but not necessarily increase precision among the top-ranked documents. Thus by precision anomalies Eastman means that search-narrowing activity results in no precision change or a decrease in precision. Multiple queries with multiple engines were run by students over a three-year period, and the formulation/engine combination was recorded, as was the number of hits. Relevance was also recorded for the top ten and top twenty ranked retrievals. While narrower searches reduced total hits, they did not usually improve precision. Initial high precision and poor query reformulation accounted for some of the results, as did Alta Vista's failure to use, in its advanced search feature, the ranking algorithm incorporated in its regular search. However, since the top listed returns often reoccurred in all formulations, it would seem that the ranking algorithms are doing a consistent job of practical precision ranking that is not improved by reformulation.
  11. Beall, J.; Kafadar, K.: Measuring typographical errors' impact on retrieval in bibliographic databases (2007) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 261) [ClassicSimilarity], result of:
          0.013115887 = score(doc=261,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 261, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=261)
      0.16666667 = coord(1/6)
    
    Abstract
    Typographical errors can block access to records in online catalogs; but, when a word contains a typo and is also spelled correctly elsewhere in the same record, access may not be blocked. To quantify the effect of typographical errors in records on information retrieval, we conducted a study to measure the proportion of records that contain a typographical error but that do not also contain a correct spelling of the same word. This article presents the experimental design, results of the study, and a statistical analysis of the results. We find that the average proportion of records that are blocked by the presence of a typo (that is, records in which a correct spelling of the word does not also occur) ranges from 35% to 99%, depending upon the frequency of the word being searched and the likelihood of the word being misspelled.
    Footnote
    Simultaneously published as Cataloger, Editor, and Scholar: Essays in Honor of Ruth C. Carter
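The abstract above defines a record as "blocked" when it contains a typo without a correct spelling of the same word elsewhere in the record. A minimal sketch of that test; the records, the misspelling, and the whitespace tokenization are all invented simplifications, not the authors' method:

```python
def is_blocked(record_text: str, typo: str, correct: str) -> bool:
    """Blocked: the record contains the typo but no correctly spelled
    occurrence of the same word anywhere else in the record."""
    words = record_text.lower().split()  # naive whitespace tokenization
    return typo in words and correct not in words

# Invented mini-sample of records, searched for the word "retrieval".
records = [
    "typographical errors in informaton retreival systems",
    "information retreival also spelled retrieval in catalogs",
    "errors and retrieval in bibliographic databases",
]
with_typo = [r for r in records if "retreival" in r.split()]
blocked = [r for r in with_typo if is_blocked(r, "retreival", "retrieval")]
print(f"{len(blocked)}/{len(with_typo)} records with the typo are blocked "
      f"({len(blocked) / len(with_typo):.0%})")   # 1/2 (50%)
```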
  12. Fraser, L.; Locatis, C.: Effects of link annotations on search performance in layered and unlayered hierarchically organized information spaces (2001) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 6937) [ClassicSimilarity], result of:
          0.012620768 = score(doc=6937,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 6937, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6937)
      0.16666667 = coord(1/6)
    
    Abstract
    The effects of link annotations on user search performance in hypertext environments having deep (layered) and shallow link structures were investigated in this study. Four environments were tested: layered-annotated, layered-unannotated, shallow-annotated, and shallow-unannotated. A single document was divided into 48 sections, and layered and unlayered versions were created. Additional versions were created by adding annotations to the links in the layered and unlayered versions. Subjects were given three queries of varying difficulty and then asked to find the answers to the queries that were contained within the hypertext environment to which they were randomly assigned. Correspondence between the wording of links and queries was used to define difficulty level. The results of the study confirmed previous research that shallow link structures are better than deep (layered) link structures. Annotations had virtually no effect on the search performance of the subjects. The subjects performed similarly in the annotated and unannotated environments, regardless of whether the link structures were shallow or deep. An analysis of question difficulty suggests that the wording in links has primacy over the wording in annotations in influencing user search behavior.
  13. Kazai, G.; Lalmas, M.: The overlap problem in content-oriented XML retrieval evaluation (2004) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 4083) [ClassicSimilarity], result of:
          0.012620768 = score(doc=4083,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 4083, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=4083)
      0.16666667 = coord(1/6)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  14. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.00
    0.0021034614 = product of:
      0.012620768 = sum of:
        0.012620768 = weight(_text_:in in 4393) [ClassicSimilarity], result of:
          0.012620768 = score(doc=4393,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 4393, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to those made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both as measured by access time and by the subjects' self-reports. However, no effects on results by traditional relevance ranking were detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  15. Voorhees, E.M.: Variations in relevance judgements and the measurement of retrieval effectiveness (2000) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 8710) [ClassicSimilarity], result of:
          0.012493922 = score(doc=8710,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 8710, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=8710)
      0.16666667 = coord(1/6)
    
  16. Harman, D.K.: The TREC test collections (2005) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 4637) [ClassicSimilarity], result of:
          0.012493922 = score(doc=4637,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 4637, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=4637)
      0.16666667 = coord(1/6)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  17. Buckley, C.; Voorhees, E.M.: Retrieval system evaluation (2005) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 648) [ClassicSimilarity], result of:
          0.012493922 = score(doc=648,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=648)
      0.16666667 = coord(1/6)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  18. Harman, D.K.: The TREC ad hoc experiments (2005) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 5711) [ClassicSimilarity], result of:
          0.012493922 = score(doc=5711,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 5711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=5711)
      0.16666667 = coord(1/6)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  19. Rao, A.; Lu, A.; Meier, E.; Ahmed, S.; Pliske, D.: Query processing in TREC6 (2000) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 6420) [ClassicSimilarity], result of:
          0.012493922 = score(doc=6420,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 6420, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=6420)
      0.16666667 = coord(1/6)
    
  20. Robertson, S.; Callan, J.: Routing and filtering (2005) 0.00
    0.0020823204 = product of:
      0.012493922 = sum of:
        0.012493922 = weight(_text_:in in 4688) [ClassicSimilarity], result of:
          0.012493922 = score(doc=4688,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21040362 = fieldWeight in 4688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=4688)
      0.16666667 = coord(1/6)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman

Types

  • a 93
  • m 1
  • s 1