Search (3 results, page 1 of 1)

  • classification_ss:"54.75 / Sprachverarbeitung <Informatik>"
  1. TREC: experiment and evaluation in information retrieval (2005) 0.02
    0.018613702 = sum of:
      0.0074650636 = product of:
        0.029860254 = sum of:
          0.029860254 = weight(_text_:authors in 636) [ClassicSimilarity], result of:
            0.029860254 = score(doc=636,freq=2.0), product of:
              0.2371355 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05201693 = queryNorm
              0.12592064 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=636)
        0.25 = coord(1/4)
      0.011148638 = product of:
        0.022297276 = sum of:
          0.022297276 = weight(_text_:t in 636) [ClassicSimilarity], result of:
            0.022297276 = score(doc=636,freq=2.0), product of:
              0.20491594 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.05201693 = queryNorm
              0.10881182 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.01953125 = fieldNorm(doc=636)
        0.5 = coord(1/2)
    
    Content
    Contains the contributions:
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. It is well structured and written; the chapters are self-contained, and references to specialized, more detailed publications appear throughout, making it easy to pursue the different aspects analyzed in the text. The book succeeds in compiling the evolution of TREC from its inception in 1992 through 2003 in a manageable volume. Thanks to the impressive effort of the authors and their experience in the field, it can satisfy the interests of a great variety of readers. While expert researchers in the IR field and IR-related companies can use it as a reference manual, it seems especially useful for students and non-expert readers approaching this research area. Like NIST, we would recommend it to anyone interested in textual information retrieval."
  2. Hutchins, W.J.; Somers, H.L.: An introduction to machine translation (1992) 0.02
    0.015766555 = product of:
      0.03153311 = sum of:
        0.03153311 = product of:
          0.06306622 = sum of:
            0.06306622 = weight(_text_:t in 4512) [ClassicSimilarity], result of:
              0.06306622 = score(doc=4512,freq=4.0), product of:
                0.20491594 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.05201693 = queryNorm
                0.3077663 = fieldWeight in 4512, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4512)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    Mat T 1091 / Automatische Übersetzung
    SBB
    Mat T 1091 / Automatische Übersetzung
  3. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.01
    0.013378366 = product of:
      0.026756732 = sum of:
        0.026756732 = product of:
          0.053513464 = sum of:
            0.053513464 = weight(_text_:t in 2605) [ClassicSimilarity], result of:
              0.053513464 = score(doc=2605,freq=2.0), product of:
                0.20491594 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.05201693 = queryNorm
                0.26114836 = fieldWeight in 2605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2605)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
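
The score breakdowns above are Lucene ClassicSimilarity (tf-idf) explain trees: each term's contribution is queryWeight (idf × queryNorm) multiplied by fieldWeight (tf × idf × fieldNorm), scaled by the coord factors. A minimal Python sketch reproduces the arithmetic for result 3 (doc 2605), taking queryNorm and fieldNorm as given from the tree; the function names here are illustrative, not Lucene's actual API:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
    term_idf = idf(doc_freq, max_docs)            # 3.9394085 for docFreq=2338
    query_weight = term_idf * query_norm          # 0.20491594 in the tree
    # tf is sqrt(freq); fieldWeight = tf * idf * fieldNorm
    field_weight = math.sqrt(freq) * term_idf * field_norm  # 0.26114836
    return coord * query_weight * field_weight

# Result 3: freq=2.0, fieldNorm=0.046875, two coord(1/2) factors -> coord=0.25
score = classic_score(freq=2.0, doc_freq=2338, max_docs=44218,
                      query_norm=0.05201693, field_norm=0.046875, coord=0.25)
print(round(score, 6))  # → 0.013378
```

With tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)), this recovers the 0.013378366 shown for the `t` term; the same two factors explain every weight(...) node in the trees above.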