Search (107 results, page 6 of 6)

  • theme_ss:"Retrievalstudien"
  1. Losada, D.E.; Parapar, J.; Barreiro, A.: Multi-armed bandits for adjudicating documents in pooling-based evaluation of information retrieval systems (2017) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 5098) [ClassicSimilarity], result of:
              0.02545288 = score(doc=5098,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 5098, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5098)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  2. Parapar, J.; Losada, D.E.; Presedo-Quindimil, M.A.; Barreiro, A.: Using score distributions to compare statistical significance tests for information retrieval evaluation (2020) 0.01
    0.00636322 = product of:
      0.01272644 = sum of:
        0.01272644 = product of:
          0.02545288 = sum of:
            0.02545288 = weight(_text_:j in 5506) [ClassicSimilarity], result of:
              0.02545288 = score(doc=5506,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.17553353 = fieldWeight in 5506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.01
    0.006182823 = product of:
      0.012365646 = sum of:
        0.012365646 = product of:
          0.024731291 = sum of:
            0.024731291 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.024731291 = score(doc=2752,freq=2.0), product of:
                0.15980367 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045634337 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 18:48:28
  4. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.00551071 = product of:
      0.01102142 = sum of:
        0.01102142 = product of:
          0.02204284 = sum of:
            0.02204284 = weight(_text_:j in 636) [ClassicSimilarity], result of:
              0.02204284 = score(doc=636,freq=6.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.1520165 = fieldWeight in 636, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Review in: JASIST 58(2007) no.6, p.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval and speeds the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important, and its success has rested mainly on its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, and question answering. The long and intense trajectory of annual TREC conferences has resulted in an immense body of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC's history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  5. Bateman, J.: Modelling the importance of end-user relevance criteria (1999) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 6606) [ClassicSimilarity], result of:
              0.020362305 = score(doc=6606,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 6606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6606)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Cleverdon, C.W.; Mills, J.: The testing of index language devices (1985) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 3643) [ClassicSimilarity], result of:
              0.020362305 = score(doc=3643,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 3643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3643)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Lioma, C.; Ounis, I.: A syntactically-based query reformulation technique for information retrieval (2008) 0.01
    0.005090576 = product of:
      0.010181152 = sum of:
        0.010181152 = product of:
          0.020362305 = sum of:
            0.020362305 = weight(_text_:j in 2031) [ClassicSimilarity], result of:
              0.020362305 = score(doc=2031,freq=2.0), product of:
                0.14500295 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.045634337 = queryNorm
                0.14042683 = fieldWeight in 2031, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2031)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Whereas in language, words of high frequency are generally associated with low content [Bookstein, A., & Swanson, D. (1974). Probabilistic models for automatic indexing. Journal of the American Society for Information Science, 25(5), 312-318; Damerau, F. J. (1965). An experiment in automatic indexing. American Documentation, 16, 283-289; Harter, S. P. (1974). A probabilistic approach to automatic keyword indexing. PhD thesis, University of Chicago; Sparck-Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28, 11-21; Yu, C., & Salton, G. (1976). Precision weighting - an effective automatic indexing method. Journal of the Association for Computing Machinery (ACM), 23(1), 76-88], shallow syntactic fragments of high frequency generally correspond to lexical fragments of high content [Lioma, C., & Ounis, I. (2006). Examining the content load of part of speech blocks for information retrieval. In Proceedings of the international committee on computational linguistics and the association for computational linguistics (COLING/ACL 2006), Sydney, Australia]. We apply this finding to Information Retrieval as follows. We present a novel automatic query reformulation technique, which is based on shallow syntactic evidence induced from various language samples and is used to enhance the performance of an Information Retrieval system. Firstly, we draw shallow syntactic evidence from language samples of varying size, and compare the effect of language sample size upon retrieval performance when using our syntactically-based query reformulation (SQR) technique. Secondly, we compare SQR to a state-of-the-art probabilistic pseudo-relevance feedback technique. Additionally, we combine both techniques and evaluate their compatibility. We evaluate our proposed technique across two standard Text REtrieval Conference (TREC) English test collections and three statistically different weighting models. Experimental results suggest that SQR markedly enhances retrieval performance, and is at least comparable to pseudo-relevance feedback. Notably, the combination of SQR and pseudo-relevance feedback further enhances retrieval performance considerably. These collective experimental results confirm the tenet that high frequency shallow syntactic fragments correspond to content-bearing lexical fragments.

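The per-hit score breakdowns above are Lucene explain trees for the ClassicSimilarity (TF-IDF) ranking model: each leaf weight is the product of a query weight (idf * queryNorm) and a field weight (sqrt(tf) * idf * fieldNorm), and every coord(1/2) factor halves the result because only one of two query clauses matched. The following minimal Python sketch recomputes the displayed score of hit 1 (0.00636322) from the factors shown in its tree; the helper name is illustrative, and queryNorm and fieldNorm are taken directly from the explain output rather than derived.

    import math

    def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
        # ClassicSimilarity factors, as printed in the explain trees above:
        #   tf          = sqrt(freq)
        #   idf         = 1 + ln(max_docs / (doc_freq + 1))
        #   queryWeight = idf * queryNorm
        #   fieldWeight = tf * idf * fieldNorm
        # The leaf score is queryWeight * fieldWeight; each coord(m/n)
        # factor then scales it by m/n on the way up the tree.
        tf = math.sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        score = (idf * query_norm) * (tf * idf * field_norm)
        for matched, total in coords:
            score *= matched / total
        return score

    # Factors copied from hit 1, weight(_text_:j in 5098):
    print(classic_similarity_score(
        freq=2.0,
        doc_freq=5010,
        max_docs=44218,
        query_norm=0.045634337,
        field_norm=0.0390625,
        coords=[(1, 2), (1, 2)],   # the two coord(1/2) lines in the tree
    ))  # ~0.00636322, matching the score shown for hit 1

The other hits follow the same recipe; only the term frequency, document frequency, fieldNorm, and coord factors differ.
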
Languages

  • e 89
  • d 14
  • f 2
  • chi 1

Types

  • a 97
  • s 4
  • m 3
  • el 2
  • p 2
  • r 1
  • x 1