Search (3 results, page 1 of 1)

  • classification_ss:"025.04 / dc22"
  1. TREC: experiment and evaluation in information retrieval (2005) 0.02
    0.017530624 = product of:
      0.070122495 = sum of:
        0.037541576 = weight(_text_:supported in 636) [ClassicSimilarity], result of:
          0.037541576 = score(doc=636,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.16358295 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.032580916 = product of:
          0.06516183 = sum of:
            0.06516183 = weight(_text_:aufsatzsammlung in 636) [ClassicSimilarity], result of:
              0.06516183 = score(doc=636,freq=4.0), product of:
                0.25424787 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03875087 = queryNorm
                0.25629252 = fieldWeight in 636, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
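    The indented tree above is a Lucene "explain" trace for ClassicSimilarity (TF-IDF) scoring. As a reading aid, here is a minimal Python sketch that reconstructs result 1's score from the constants shown in the trace; the helper names are illustrative, not Lucene API.
    ```python
    import math

    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    def idf(doc_freq: int, max_docs: int) -> float:
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    # Per-clause score = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        i = idf(doc_freq, max_docs)
        return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

    QUERY_NORM = 0.03875087   # queryNorm shared by all clauses of this query
    MAX_DOCS = 44218

    supported = clause_weight(2.0, 321, MAX_DOCS, QUERY_NORM, 0.01953125)
    aufsatz = clause_weight(4.0, 169, MAX_DOCS, QUERY_NORM, 0.01953125)

    # The aufsatzsammlung clause sits in a nested boolean query that matched
    # 1 of 2 sub-clauses (inner coord(1/2)); the overall query matched
    # 2 of 8 clauses (outer coord(2/8)).
    score = (supported + aufsatz * (1 / 2)) * (2 / 8)
    print(f"{score:.9f}")  # ~0.017530624, matching the explain output
    ```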
    
    Footnote
    Review in: JASIST 58(2007) no.6, p.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval and speeds the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important, and its success has rested largely on its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems, such as Web retrieval, speech retrieval, and question answering. The long trajectory of annual TREC conferences has produced an immense body of documents reflecting the various evaluation and research efforts, which sometimes makes it difficult to see clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC's history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
  2. Antoniou, G.; Harmelen, F. van: A semantic Web primer (2004) 0.00
    0.004742827 = product of:
      0.037942614 = sum of:
        0.037942614 = weight(_text_:cooperative in 468) [ClassicSimilarity], result of:
          0.037942614 = score(doc=468,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.16445437 = fieldWeight in 468, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.01953125 = fieldNorm(doc=468)
      0.125 = coord(1/8)
    
    Series
    Cooperative information systems
  3. O'Connor, B.C.; Kearns, J.; Anderson, R.L.: Doing things with information : beyond indexing and abstracting (2008) 0.00
    0.002883905 = product of:
      0.02307124 = sum of:
        0.02307124 = weight(_text_:work in 4297) [ClassicSimilarity], result of:
          0.02307124 = score(doc=4297,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 4297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=4297)
      0.125 = coord(1/8)
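    The same arithmetic explains the single-clause traces for results 2 and 3. A brief sketch, reusing the formulas from the snippet under result 1; with one of eight query clauses matching, the outer factor is coord(1/8) = 0.125:
    ```python
    import math

    def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        i = 1.0 + math.log(max_docs / (doc_freq + 1))  # ClassicSimilarity idf
        return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

    QUERY_NORM, MAX_DOCS = 0.03875087, 44218

    # result 2, term "cooperative": docFreq=311, fieldNorm=0.01953125
    print(clause_weight(2.0, 311, MAX_DOCS, QUERY_NORM, 0.01953125) * 0.125)  # ~0.004742827

    # result 3, term "work": docFreq=3060, fieldNorm=0.03125. The much lower idf
    # of this common term is why result 3 scores lowest despite its larger fieldNorm.
    print(clause_weight(2.0, 3060, MAX_DOCS, QUERY_NORM, 0.03125) * 0.125)    # ~0.002883905
    ```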
    
    Abstract
    The relationship between a person with a question and a source of information is complex. Indexing and abstracting often fail because too much emphasis is put on the mechanics of description and too little thought is given to what ought to be represented. The research literature suggests that inappropriate representation results in failed searches a significant proportion of the time, perhaps even in a majority of cases. "Doing Things with Information" seeks to rectify this unfortunate situation by emphasizing methods of modeling and constructing appropriate representations of such questions and documents. Students in programs of information studies will find focal points for discussion about system design and the refinement of existing systems. Librarians, scholars, and those who work within large document collections, whether paper or electronic, will find insights into the strengths and weaknesses of the access systems they use.