Search (80 results, page 1 of 4)

  • × language_ss:"e"
  • × theme_ss:"Retrievalstudien"
  • × year_i:[2000 TO 2010}
  1. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.04
    0.03755433 = product of:
      0.11266299 = sum of:
        0.11266299 = product of:
          0.16899449 = sum of:
            0.07221426 = weight(_text_:retrieval in 6438) [ClassicSimilarity], result of:
              0.07221426 = score(doc=6438,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
            0.09678023 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.09678023 = score(doc=6438,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19
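The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sanity check, the displayed score for result 1 can be reproduced from the constants in the tree; a minimal sketch (function names are mine, the docFreq/queryNorm/fieldNorm values are read directly off the explain tree above):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq: float, doc_freq: int, max_docs: int,
                query_norm: float, field_norm: float) -> float:
    """weight = queryWeight * fieldWeight, where queryWeight = idf * queryNorm
    and fieldWeight = tf * idf * fieldNorm with tf = sqrt(freq)."""
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = math.sqrt(freq) * i * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.051022716   # queryNorm from the explain tree
FIELD_NORM = 0.109375      # fieldNorm(doc=6438)

w_retrieval = term_weight(2.0, 5836, 44218, QUERY_NORM, FIELD_NORM)
w_22        = term_weight(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM)

# Two of three query clauses matched -> coord(2/3); the outer coord(1/3)
# mirrors the tree's final multiplication.
score = (w_retrieval + w_22) * (2 / 3) * (1 / 3)
print(round(score, 8))  # ~0.0375543, the displayed score for result 1
```

The idf values computed this way (3.024915 for "retrieval", 3.5018296 for "22") match the tree to all printed digits, confirming the formula.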
  2. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.04
    0.03736912 = product of:
      0.11210736 = sum of:
        0.11210736 = sum of:
          0.025961377 = weight(_text_:online in 1184) [ClassicSimilarity], result of:
            0.025961377 = score(doc=1184,freq=2.0), product of:
              0.1548489 = queryWeight, product of:
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.051022716 = queryNorm
              0.16765618 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0349014 = idf(docFreq=5778, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.051581617 = weight(_text_:retrieval in 1184) [ClassicSimilarity], result of:
            0.051581617 = score(doc=1184,freq=8.0), product of:
              0.15433937 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.051022716 = queryNorm
              0.33420905 = fieldWeight in 1184, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.03456437 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.03456437 = score(doc=1184,freq=2.0), product of:
              0.17867287 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051022716 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.33333334 = coord(1/3)
    
    Abstract
     I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  3. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.02
    0.021719709 = product of:
      0.06515913 = sum of:
        0.06515913 = product of:
          0.09773869 = sum of:
            0.06317432 = weight(_text_:retrieval in 2026) [ClassicSimilarity], result of:
              0.06317432 = score(doc=2026,freq=12.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40932083 = fieldWeight in 2026, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
            0.03456437 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
              0.03456437 = score(doc=2026,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.19345059 = fieldWeight in 2026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the result of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims at stimulating researchers into considering user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
     Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  4. Beall, J.; Kafadar, K.: Measuring typographical errors' impact on retrieval in bibliographic databases (2007) 0.02
    0.016649358 = product of:
      0.049948074 = sum of:
        0.049948074 = product of:
          0.07492211 = sum of:
            0.031153653 = weight(_text_:online in 261) [ClassicSimilarity], result of:
              0.031153653 = score(doc=261,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=261)
            0.043768454 = weight(_text_:retrieval in 261) [ClassicSimilarity], result of:
              0.043768454 = score(doc=261,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2835858 = fieldWeight in 261, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=261)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Typographical errors can block access to records in online catalogs; but, when a word contains a typo and is also spelled correctly elsewhere in the same record, access may not be blocked. To quantify the effect of typographical errors in records on information retrieval, we conducted a study to measure the proportion of records that contain a typographical error but that do not also contain a correct spelling of the same word. This article presents the experimental design, results of the study, and a statistical analysis of the results. We find that the average proportion of records that are blocked by the presence of a typo (that is, records in which a correct spelling of the word does not also occur) ranges from 35% to 99%, depending upon the frequency of the word being searched and the likelihood of the word being misspelled.
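The measurement the abstract describes — a typo "blocks" keyword access to a record only when no correctly spelled occurrence of the same word appears elsewhere in that record — can be sketched as follows (the records and the typo pair are invented examples, not the study's data):

```python
def is_blocked(record_text: str, typo: str, correct: str) -> bool:
    """A record is 'blocked' if it contains the misspelling but not the
    correct spelling of the same word anywhere else in the record."""
    words = record_text.lower().split()
    return typo in words and correct not in words

# Toy bibliographic records (invented): the first contains only the typo,
# the second contains both the typo and the correct form.
records = [
    "measuring typographical erors impact on retrieval",
    "typographical erors and errors in bibliographic databases",
]

blocked = [r for r in records if is_blocked(r, "erors", "errors")]
proportion = len(blocked) / len(records)
print(proportion)  # 0.5 for this toy sample
```

The study's 35%-99% range is the same proportion computed over real catalog records, stratified by word frequency and misspelling likelihood.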
  5. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.02
    0.016397223 = product of:
      0.049191665 = sum of:
        0.049191665 = product of:
          0.073787495 = sum of:
            0.046136 = weight(_text_:retrieval in 2752) [ClassicSimilarity], result of:
              0.046136 = score(doc=2752,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.29892567 = fieldWeight in 2752, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
            0.027651496 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.027651496 = score(doc=2752,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
     We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over the constituent IR models only if the models are all conceptually/algorithmically quite dissimilar and equally well performing, in that order of importance.
    Date
    22. 3.2009 18:48:28
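The two fusion scores named in the abstract above can be sketched briefly; the ranked lists here are invented for illustration and are not the TREC 5 runs:

```python
from collections import defaultdict

def comb_sum(runs):
    """CombSUM: sum of min-max-normalized retrieval scores across runs."""
    fused = defaultdict(float)
    for run in runs:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0
        for doc, s in run.items():
            fused[doc] += (s - lo) / span
    return dict(fused)

def borda(runs, depth):
    """Borda: each run awards depth - rank points to its documents (rank 0-based)."""
    fused = defaultdict(float)
    for run in runs:
        for rank, doc in enumerate(sorted(run, key=run.get, reverse=True)):
            fused[doc] += depth - rank
    return dict(fused)

# Invented score lists standing in for two retrieval models' outputs.
run_a = {"d1": 9.0, "d2": 4.0, "d3": 1.0}
run_b = {"d2": 0.8, "d1": 0.5, "d4": 0.1}

print(comb_sum([run_a, run_b]))
print(borda([run_a, run_b], depth=3))
```

The design difference matters: CombSUM preserves each model's score margins after normalization, while Borda discards scores and uses only rank positions, so the two can order fused documents differently.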
  6. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.01
    0.013755097 = product of:
      0.04126529 = sum of:
        0.04126529 = product of:
          0.12379587 = sum of:
            0.12379587 = weight(_text_:retrieval in 2556) [ClassicSimilarity], result of:
              0.12379587 = score(doc=2556,freq=18.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.8021017 = fieldWeight in 2556, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2556)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    LCSH
    Information storage and retrieval systems / Evaluation
    RSWK
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Subject
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Information storage and retrieval systems / Evaluation
  7. Buckley, C.; Voorhees, E.M.: Retrieval system evaluation (2005) 0.01
    0.011347376 = product of:
      0.034042127 = sum of:
        0.034042127 = product of:
          0.10212638 = sum of:
            0.10212638 = weight(_text_:retrieval in 648) [ClassicSimilarity], result of:
              0.10212638 = score(doc=648,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.6617001 = fieldWeight in 648, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=648)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  8. Cole, C.: Intelligent information retrieval : Part IV: Testing the timing of two information retrieval devices in a naturalistic setting (2001) 0.01
    0.009726323 = product of:
      0.02917897 = sum of:
        0.02917897 = product of:
          0.08753691 = sum of:
            0.08753691 = weight(_text_:retrieval in 365) [ClassicSimilarity], result of:
              0.08753691 = score(doc=365,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5671716 = fieldWeight in 365, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=365)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
  9. Buckley, C.; Voorhees, E.M.: Retrieval evaluation with incomplete information (2004) 0.01
    0.009726323 = product of:
      0.02917897 = sum of:
        0.02917897 = product of:
          0.08753691 = sum of:
            0.08753691 = weight(_text_:retrieval in 4127) [ClassicSimilarity], result of:
              0.08753691 = score(doc=4127,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5671716 = fieldWeight in 4127, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4127)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
     SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin, u.a
  10. Kwok, K.-L.: Ten years of ad hoc retrieval at TREC using PIRCS (2005) 0.01
    0.009726323 = product of:
      0.02917897 = sum of:
        0.02917897 = product of:
          0.08753691 = sum of:
            0.08753691 = weight(_text_:retrieval in 5090) [ClassicSimilarity], result of:
              0.08753691 = score(doc=5090,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5671716 = fieldWeight in 5090, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5090)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  11. Voorhees, E.M.: Text REtrieval Conference (TREC) (2009) 0.01
    0.009170066 = product of:
      0.027510196 = sum of:
        0.027510196 = product of:
          0.08253059 = sum of:
            0.08253059 = weight(_text_:retrieval in 3890) [ClassicSimilarity], result of:
              0.08253059 = score(doc=3890,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5347345 = fieldWeight in 3890, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3890)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval technology.
  12. Voorhees, E.M.: On test collections for adaptive information retrieval (2008) 0.01
    0.009098142 = product of:
      0.027294425 = sum of:
        0.027294425 = product of:
          0.081883274 = sum of:
            0.081883274 = weight(_text_:retrieval in 2444) [ClassicSimilarity], result of:
              0.081883274 = score(doc=2444,freq=14.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5305404 = fieldWeight in 2444, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2444)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the "core competency" of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by controlling for (some) sources of variability, thus increasing the power of experiments that compare system effectiveness while reducing their cost. However, even within the highly-abstracted case of the Cranfield paradigm, meta-analysis demonstrates that the user/topic effect is greater than the system effect, so experiments must include a relatively large number of topics to distinguish systems' effectiveness. The evidence further suggests that changing the abstraction slightly to include just a bit more characterization of the user will result in a dramatic loss of power or increase in cost of retrieval experiments. Defining a new, feasible abstraction for supporting adaptive IR research will require winnowing the list of all possible factors that can affect retrieval behavior to a minimum number of essential factors.
    Footnote
     Contribution to a special issue "Adaptive information retrieval"
  13. Della Mea, V.; Mizzaro, S.: Measuring retrieval effectiveness : a new proposal and a first experimental validation (2004) 0.01
    0.008970889 = product of:
      0.026912667 = sum of:
        0.026912667 = product of:
          0.080738 = sum of:
            0.080738 = weight(_text_:retrieval in 2263) [ClassicSimilarity], result of:
              0.080738 = score(doc=2263,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5231199 = fieldWeight in 2263, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2263)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text REtrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems about precision and recall are also highlighted and discussed.
  14. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.008476693 = product of:
      0.02543008 = sum of:
        0.02543008 = product of:
          0.076290235 = sum of:
            0.076290235 = weight(_text_:retrieval in 636) [ClassicSimilarity], result of:
              0.076290235 = score(doc=636,freq=70.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.49430186 = fieldWeight in 636, product of:
                  8.3666 = tf(freq=70.0), with freq of:
                    70.0 = termFreq=70.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
     Contains the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
     Review in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
    LCSH
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Subject
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
  15. Alemayehu, N.: Analysis of performance variation using query expansion (2003) 0.01
    0.008423243 = product of:
      0.025269728 = sum of:
        0.025269728 = product of:
          0.07580918 = sum of:
            0.07580918 = weight(_text_:retrieval in 1454) [ClassicSimilarity], result of:
              0.07580918 = score(doc=1454,freq=12.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.49118498 = fieldWeight in 1454, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1454)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Information retrieval performance evaluation is commonly based on the classical recall- and precision-based figures or graphs. However, important information indicating causes of variation may remain hidden under the average recall and precision figures. Identifying significant causes of variation can help researchers and developers focus on the opportunities for improvement that underlie the averages. This article presents a case study showing the potential of a statistical repeated-measures analysis of variance for testing the significance of factors in retrieval performance variation. The TREC-9 Query Track performance data are used as a case study, and the factors studied are retrieval method, topic, and their interaction. The results show that retrieval method, topic, and their interaction are all significant. A topic-level analysis is also made to examine the nature of variation in the performance of retrieval methods across topics. The observed retrieval performances of expansion runs are truly significant improvements for most of the topics. Analyses of the effect of query expansion on document ranking confirm that expansion affects ranking positively.
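The analysis the abstract describes can be sketched in a few lines. Below is a minimal repeated-measures ANOVA on hypothetical average-precision (AP) values, with topics acting as subjects and retrieval method as the within-subject factor; the data and the helper name `rm_anova` are illustrative, not from the paper.

```python
def rm_anova(ap):
    """One-way repeated-measures ANOVA on a topics x methods AP matrix.

    Returns the F statistic for the method effect and its degrees of
    freedom, partitioning out between-topic variation first.
    """
    t, m = len(ap), len(ap[0])
    grand = sum(sum(row) for row in ap) / (t * m)
    row_means = [sum(row) / m for row in ap]                       # per topic
    col_means = [sum(ap[i][j] for i in range(t)) / t for j in range(m)]  # per method

    ss_method = t * sum((cm - grand) ** 2 for cm in col_means)
    ss_topic  = m * sum((rm - grand) ** 2 for rm in row_means)
    ss_total  = sum((x - grand) ** 2 for row in ap for x in row)
    ss_resid  = ss_total - ss_method - ss_topic  # method x topic residual

    df_method, df_resid = m - 1, (m - 1) * (t - 1)
    f_stat = (ss_method / df_method) / (ss_resid / df_resid)
    return f_stat, df_method, df_resid

# Hypothetical AP scores: 4 topics (rows) under 3 retrieval methods (columns)
ap = [
    [0.30, 0.45, 0.50],
    [0.10, 0.20, 0.15],
    [0.60, 0.70, 0.75],
    [0.25, 0.35, 0.40],
]
f_stat, df1, df2 = rm_anova(ap)
print(f"F({df1},{df2}) = {f_stat:.2f}")
```

Comparing `f_stat` against the F distribution with (df1, df2) degrees of freedom then gives the significance test for the method factor; the paper additionally models the method-by-topic interaction, which requires replicated runs per cell.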
  16. Fan, W.; Luo, M.; Wang, L.; Xi, W.; Fox, E.A.: Tuning before feedback : combining ranking discovery and blind feedback for robust retrieval (2004) 0.01
    0.00810527 = product of:
      0.024315808 = sum of:
        0.024315808 = product of:
          0.07294742 = sum of:
            0.07294742 = weight(_text_:retrieval in 4052) [ClassicSimilarity], result of:
              0.07294742 = score(doc=4052,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.47264296 = fieldWeight in 4052, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4052)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  17. Kazai, G.; Lalmas, M.: ¬The overlap problem in content-oriented XML retrieval evaluation (2004) 0.01
    0.00810527 = product of:
      0.024315808 = sum of:
        0.024315808 = product of:
          0.07294742 = sum of:
            0.07294742 = weight(_text_:retrieval in 4083) [ClassicSimilarity], result of:
              0.07294742 = score(doc=4083,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.47264296 = fieldWeight in 4083, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4083)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  18. Ménard, E.: Image retrieval : a comparative study on the influence of indexing vocabularies (2009) 0.01
    0.00810527 = product of:
      0.024315808 = sum of:
        0.024315808 = product of:
          0.07294742 = sum of:
            0.07294742 = weight(_text_:retrieval in 3250) [ClassicSimilarity], result of:
              0.07294742 = score(doc=3250,freq=16.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.47264296 = fieldWeight in 3250, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3250)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on a research project that compared two different approaches to the indexing of ordinary images representing common objects: traditional indexing with controlled vocabulary and free indexing with uncontrolled vocabulary. We also compared image retrieval within two contexts: a monolingual context, where the language of the query is the same as the indexing language, and a multilingual context, where the language of the query differs from the indexing language. As a means of comparing the performance of each indexing form, a simulation of the retrieval process involving 30 images was performed with 60 participants. A questionnaire was also submitted to participants in order to gather information about the retrieval process and performance. The results of the retrieval simulation confirm that retrieval is more effective and more satisfactory for the searcher when the images are indexed with the approach combining the controlled and uncontrolled vocabularies. The results also indicate that the indexing approach with controlled vocabulary is more efficient (queries needed to retrieve an image) than the approach with uncontrolled vocabulary. However, no significant difference in temporal efficiency (time required to retrieve an image) was observed. Finally, the comparison of the two linguistic contexts reveals that retrieval is more effective and more efficient (queries needed to retrieve an image) in the monolingual context than in the multilingual context. Furthermore, image searchers are more satisfied when the retrieval is done in a monolingual rather than a multilingual context.
  19. Wilbur, W.J.: Global term weights for document retrieval learned from TREC data (2001) 0.01
    0.008023808 = product of:
      0.024071421 = sum of:
        0.024071421 = product of:
          0.07221426 = sum of:
            0.07221426 = weight(_text_:retrieval in 2647) [ClassicSimilarity], result of:
              0.07221426 = score(doc=2647,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 2647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2647)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
  20. Hersh, W.R.; Over, P.: Interactivity at the Text Retrieval Conference (TREC) (2001) 0.01
    0.008023808 = product of:
      0.024071421 = sum of:
        0.024071421 = product of:
          0.07221426 = sum of:
            0.07221426 = weight(_text_:retrieval in 5486) [ClassicSimilarity], result of:
              0.07221426 = score(doc=5486,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.46789268 = fieldWeight in 5486, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5486)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    

Types

  • a 78
  • m 2
  • el 1
  • s 1