Search (57 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Ruthven, I.; Baillie, M.; Elsweiler, D.: ¬The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.04
    0.039732672 = product of:
      0.11919801 = sum of:
        0.11919801 = weight(_text_:interest in 835) [ClassicSimilarity], result of:
          0.11919801 = score(doc=835,freq=6.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.47537887 = fieldWeight in 835, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=835)
      0.33333334 = coord(1/3)
    
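    The indented breakdown above (and in each entry that follows) is Lucene "explain" output for the classic TF-IDF similarity, which the tree itself names as ClassicSimilarity. As a check on the arithmetic, the short Python sketch below reproduces the score of entry 1 from the values shown; the formulas assumed here are the standard ClassicSimilarity ones: tf = sqrt(freq), idf = ln(maxDocs / (docFreq + 1)) + 1, queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and score = queryWeight * fieldWeight * coord.
    
      import math
      
      # Values copied from the explain tree of entry 1 (doc=835, term "interest").
      freq, doc_freq, max_docs = 6.0, 835, 44218
      query_norm, field_norm = 0.05046903, 0.0390625
      coord = 1.0 / 3.0                                # 1 of 3 query terms matched
      
      tf = math.sqrt(freq)                             # 2.4494898
      idf = math.log(max_docs / (doc_freq + 1)) + 1    # 4.9682584
      query_weight = idf * query_norm                  # 0.25074318
      field_weight = tf * idf * field_norm             # 0.47537887
      score = query_weight * field_weight * coord      # 0.039732672
      
      print(f"{score:.9f}")                            # rounds to the 0.04 shown above
    
    The same arithmetic, with the respective freq, fieldNorm, and coord values, reproduces the other score trees in this listing; entries matching several query terms (such as entry 6 below) sum one such weight per term before applying coord.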
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses were correlated to the questionnaire questions and the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and on the balance between how many documents were marked as marginally or highly relevant. These factors are also shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research is conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but bear mainly on the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation, to create tasks that are interesting and relevant to experimental subjects. Originality/value - Examining different search variables within one study to compare the relative effects of these variables on the search outcomes.
  2. Byrne, J.R.: Relative effectiveness of titles, abstracts, and subject headings for machine retrieval from the COMPENDEX services (1975) 0.03
    0.032115534 = product of:
      0.0963466 = sum of:
        0.0963466 = weight(_text_:interest in 1604) [ClassicSimilarity], result of:
          0.0963466 = score(doc=1604,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.38424414 = fieldWeight in 1604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1604)
      0.33333334 = coord(1/3)
    
    Abstract
    We have investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The COMPENDEX database was used for this study since it combined all of these data elements of interest. In general, the results obtained from the experiments indicate that, as expected, titles alone are not satisfactory for efficient retrieval. The combination of titles and abstracts came the closest to 100% retrieval, with searching of abstracts alone doing almost as well. Indexer input, although necessary for 100% retrieval in almost all cases, was found to be relatively unimportant.
  3. Palmquist, R.A.; Kim, K.-S.: Cognitive style and on-line database search experience as predictors of Web search performance (2000) 0.03
    0.027527599 = product of:
      0.082582794 = sum of:
        0.082582794 = weight(_text_:interest in 4605) [ClassicSimilarity], result of:
          0.082582794 = score(doc=4605,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.3293521 = fieldWeight in 4605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.046875 = fieldNorm(doc=4605)
      0.33333334 = coord(1/3)
    
    Abstract
    This study sought to investigate the effects of cognitive style (field dependent and field independent) and on-line database search experience (novice and experienced) on the WWW search performance of undergraduate college students (n=48). It also attempted to find user factors that could be used to predict search efficiency. Search performance, the dependent variable, was defined in 2 ways: (1) the time required for retrieving a relevant information item, and (2) the number of nodes traversed in retrieving a relevant information item. The required search tasks were carried out on a university Web site and included a factual task and a topical search task of interest to the participant. Results indicated that while cognitive style (FD/FI) significantly influenced the search performance of novice searchers, the influence was greatly reduced in those searchers who had on-line database search experience. Based on the findings, suggestions for possible changes to the design of the current Web interface and to user training programs are provided.
  4. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: ¬The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.03
    0.027527599 = product of:
      0.082582794 = sum of:
        0.082582794 = weight(_text_:interest in 2021) [ClassicSimilarity], result of:
          0.082582794 = score(doc=2021,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.3293521 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.33333334 = coord(1/3)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  5. Li, J.; Zhang, P.; Song, D.; Wu, Y.: Understanding an enriched multidimensional user relevance model by analyzing query logs (2017) 0.03
    0.027527599 = product of:
      0.082582794 = sum of:
        0.082582794 = weight(_text_:interest in 3961) [ClassicSimilarity], result of:
          0.082582794 = score(doc=3961,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.3293521 = fieldWeight in 3961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.046875 = fieldNorm(doc=3961)
      0.33333334 = coord(1/3)
    
    Abstract
    Modeling multidimensional relevance in information retrieval (IR) has attracted much attention in recent years. However, most existing studies are conducted through relatively small-scale user studies, which may not reflect a real-world and natural search scenario. In this article, we propose to study the multidimensional user relevance model (MURM) on large-scale query logs, which record users' various search behaviors (e.g., query reformulations, clicks and dwelling time) in natural search settings. We advance an existing MURM (comprising five dimensions: topicality, novelty, reliability, understandability, and scope) by adding two dimensions, interest and habit, which represent personalized relevance judgments on retrieved documents. Further, for each dimension in the enriched MURM, a set of computable features is formulated. By conducting extensive document ranking experiments on Bing's query logs and TREC Session Track data, we systematically investigated the impact of each dimension on retrieval performance and obtained a series of insightful findings that may benefit the design of future IR systems.
  6. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.02
    0.024986658 = product of:
      0.07495997 = sum of:
        0.07495997 = sum of:
          0.03393283 = weight(_text_:classification in 2552) [ClassicSimilarity], result of:
            0.03393283 = score(doc=2552,freq=2.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.21111822 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
          0.04102714 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
            0.04102714 = score(doc=2552,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.23214069 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
    Date
    9. 2.1997 18:44:22
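    Entry 6 names two consistency measures without stating them. The sketch below uses the forms in which they are usually cited in the indexing-consistency literature (an assumption, since the entry itself does not give the formulas): Hooper (1965) computes A / (A + M + N), and the measure the abstract attributes to Rollin (1981) computes 2A / (2A + M + N), where A is the number of terms both indexers assigned and M and N are the counts of terms unique to each indexer.
    
      def hooper(common: int, only_first: int, only_second: int) -> float:
          # Hooper (1965): shared terms over all terms used by either indexer.
          return common / (common + only_first + only_second)
      
      def rollin(common: int, only_first: int, only_second: int) -> float:
          # "Rollin" (1981): shared terms counted once per indexer.
          return 2 * common / (2 * common + only_first + only_second)
      
      # Invented example: the indexers agree on 6 terms and each adds 3 of their own.
      print(hooper(6, 3, 3))   # 0.5
      print(rollin(6, 3, 3))   # 0.666..., always >= Hooper's value, as in the abstract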
  7. Ellis, D.: ¬The dilemma of measurement in information retrieval research (1996) 0.02
    0.022939665 = product of:
      0.068818994 = sum of:
        0.068818994 = weight(_text_:interest in 3003) [ClassicSimilarity], result of:
          0.068818994 = score(doc=3003,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 3003, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3003)
      0.33333334 = coord(1/3)
    
    Abstract
    The problem of measurement in information retrieval research is traced to its source in the first retrieval tests. The problem is seen as presenting a chronic dilemma for the field. This dilemma has taken 3 forms as the discipline has evolved: (1) the dilemma of measurement in the archetypal approach: stated relevance versus user relevance; (2) the dilemma of measurement in the probabilistic approach: realism versus formalism; and (3) the dilemma of measurement in the Information Retrieval-Expert System (IR-ES) approach: linear measures of relevance versus logarithmic measures of knowledge. It is argued that the dilemma of measurement has remained intractable, even given the different assumptions of the different approaches, for 3 connected reasons: the nature of the subject matter of the field; the nature of relevance judgement; and the nature of cognition and knowledge. Finally, it is concluded that the original vision of information retrieval research as a discipline founded on quantification proved restricting for its theoretical and methodological development, and that increasing recognition of this is reflected in growing interest in qualitative methods in information retrieval research in relation to cognitive, behavioral, and affective aspects of the information retrieval interaction.
  8. Kelly, D.; Sugimoto, C.R.: ¬A systematic review of interactive information retrieval evaluation studies, 1967-2006 (2013) 0.02
    0.022939665 = product of:
      0.068818994 = sum of:
        0.068818994 = weight(_text_:interest in 684) [ClassicSimilarity], result of:
          0.068818994 = score(doc=684,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=684)
      0.33333334 = coord(1/3)
    
    Abstract
    With the increasing number and diversity of search tools available, interest in the evaluation of search systems, particularly from a user perspective, has grown among researchers. More researchers are designing and evaluating interactive information retrieval (IIR) systems and beginning to innovate in evaluation methods. Maturation of a research specialty relies on the ability to replicate research, provide standards for measurement and analysis, and understand past endeavors. This article presents a historical overview of 40 years of IIR evaluation studies using the method of systematic review. A total of 2,791 journal and conference units were manually examined and 127 articles were selected for analysis in this study, based on predefined inclusion and exclusion criteria. These articles were systematically coded using features such as author, publication date, sources and references, and properties of the research method used in the articles, such as number of subjects, tasks, corpora, and measures. Results include data describing the growth of IIR studies over time, the most frequently occurring and cited authors and sources, and the most common types of corpora and measures used. An additional product of this research is a bibliography of IIR evaluation research that can be used by students, teachers, and those new to the area. To the authors' knowledge, this is the first historical, systematic characterization of the IIR evaluation literature, including the documentation of methods and measures used by researchers in this specialty.
  9. Ruthven, I.: Relevance behaviour in TREC (2014) 0.02
    0.022939665 = product of:
      0.068818994 = sum of:
        0.068818994 = weight(_text_:interest in 1785) [ClassicSimilarity], result of:
          0.068818994 = score(doc=1785,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 1785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1785)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as a test-bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but to learning more about human judgements of information and relevance, and that these studies can provide useful research questions for other types of investigation. Design/methodology/approach - Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions, and to answer new research questions for exploring relevance. Findings - The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance. Research limitations/implications - The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections. Originality/value - The paper presents an original viewpoint on relevance investigations and on TREC itself by motivating TREC as a source of inspiration for understanding relevance rather than purely as a source of evaluation material.
  10. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.015955 = product of:
      0.047864996 = sum of:
        0.047864996 = product of:
          0.09572999 = sum of:
            0.09572999 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09572999 = score(doc=262,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  11. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    0.015955 = product of:
      0.047864996 = sum of:
        0.047864996 = product of:
          0.09572999 = sum of:
            0.09572999 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.09572999 = score(doc=6418,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.6, S.57-58
  12. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.015955 = product of:
      0.047864996 = sum of:
        0.047864996 = product of:
          0.09572999 = sum of:
            0.09572999 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.09572999 = score(doc=6438,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19
  13. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.015955 = product of:
      0.047864996 = sum of:
        0.047864996 = product of:
          0.09572999 = sum of:
            0.09572999 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.09572999 = score(doc=5089,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 18:43:54
  14. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1997) 0.01
    0.013060754 = product of:
      0.03918226 = sum of:
        0.03918226 = product of:
          0.07836452 = sum of:
            0.07836452 = weight(_text_:classification in 576) [ClassicSimilarity], result of:
              0.07836452 = score(doc=576,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.48755667 = fieldWeight in 576, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=576)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    From classification to 'knowledge organization': Dorking revisited or 'past is prelude'. A collection of reprints to commemorate the forty year span between the Dorking Conference (First International Study Conference on Classification Research 1957) and the Sixth International Study Conference on Classification Research (London 1997). Ed.: A. Gilchrist
  15. Cross-language information retrieval (1998) 0.01
    0.011469833 = product of:
      0.034409497 = sum of:
        0.034409497 = weight(_text_:interest in 6299) [ClassicSimilarity], result of:
          0.034409497 = score(doc=6299,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.13723004 = fieldWeight in 6299, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
      0.33333334 = coord(1/3)
    
    Footnote
    Christian Fluhr et al (DIST/SMTI, France) outline the EMIR (European Multilingual Information Retrieval) and ESPRIT projects. They found that using SYSTRAN to machine translate queries and to access material from various multilingual databases produced less relevant results than a method referred to as 'multilingual reformulation' (the mechanics of which are only hinted at). An interesting technique is Latent Semantic Indexing (LSI), described by Michael Littman et al (Brown University) and, most clearly, by David Evans et al (Carnegie Mellon University). LSI involves creating matrices of documents and the terms they contain and 'fitting' related documents into a reduced matrix space. This effectively allows queries to be mapped onto a common semantic representation of the documents. Eugenio Picchi and Carol Peters (Pisa) report on a procedure to create links between translation equivalents in an Italian-English parallel corpus. The links are used to construct parallel linguistic contexts in real-time for any term or combination of terms that is being searched for in either language. Their interest is primarily lexicographic but they plan to apply the same procedure to comparable corpora, i.e. to texts which are not translations of each other but which share the same domain. Kiyoshi Yamabana et al (NEC, Japan) address the issue of how to disambiguate between alternative translations of query terms. Their DMAX (double maximise) method looks at co-occurrence frequencies between both source language words and target language words in order to arrive at the most probable translation. The statistical data for the decision are derived, not from the translation texts, but independently from monolingual corpora in each language. An interactive user interface allows the user to influence the selection of terms during the matching process. Denis Gachot et al (SYSTRAN) describe the SYSTRAN NLP browser, a prototype tool which collects parsing information derived from a text or corpus previously translated with SYSTRAN. The user enters queries into the browser in either a structured or free form and receives grammatical and lexical information about the source text and/or its translation.
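    To make the footnote's sketch of LSI concrete: a term-document matrix is factored with a truncated SVD, documents are represented in the reduced space, and a query is "folded in" to the same space before ranking. The NumPy sketch below follows this standard formulation; the tiny matrix and the query are invented for illustration.
    
      import numpy as np
      
      A = np.array([[2., 0., 1., 0.],          # rows = terms
                    [1., 1., 0., 0.],          # columns = documents
                    [0., 2., 0., 1.],
                    [0., 0., 1., 2.]])
      
      k = 2                                    # latent dimensions to keep
      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T   # truncated factors; rows of Vk = docs
      
      q = np.array([1., 0., 1., 0.])           # query as a raw term vector
      qk = (q @ Uk) / sk                       # fold in: Sigma_k^-1 U_k^T q
      
      # rank documents by cosine similarity in the reduced "semantic" space
      sims = Vk @ qk / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(qk))
      print(np.argsort(-sims))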
  16. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.011469833 = product of:
      0.034409497 = sum of:
        0.034409497 = weight(_text_:interest in 636) [ClassicSimilarity], result of:
          0.034409497 = score(doc=636,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.13723004 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.33333334 = coord(1/3)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
  17. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.01
    0.011396429 = product of:
      0.034189284 = sum of:
        0.034189284 = product of:
          0.06837857 = sum of:
            0.06837857 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.06837857 = score(doc=3103,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:55:22
  18. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.011396429 = product of:
      0.034189284 = sum of:
        0.034189284 = product of:
          0.06837857 = sum of:
            0.06837857 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06837857 = score(doc=3107,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
  19. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.01
    0.011396429 = product of:
      0.034189284 = sum of:
        0.034189284 = product of:
          0.06837857 = sum of:
            0.06837857 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.06837857 = score(doc=2417,freq=2.0), product of:
                0.17673394 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046903 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.22-25
  20. Sparck Jones, K.: Reflections on TREC (1997) 0.01
    0.009795565 = product of:
      0.029386694 = sum of:
        0.029386694 = product of:
          0.058773387 = sum of:
            0.058773387 = weight(_text_:classification in 580) [ClassicSimilarity], result of:
              0.058773387 = score(doc=580,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.3656675 = fieldWeight in 580, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    From classification to 'knowledge organization': Dorking revisited or 'past is prelude'. A collection of reprints to commemorate the forty year span between the Dorking Conference (First International Study Conference on Classification Research 1957) and the Sixth International Study Conference on Classification Research (London 1997). Ed.: A. Gilchrist

Languages

  • e 52
  • d 3
  • f 1

Types

  • a 51
  • s 5
  • m 4
  • el 1