Search (3 results, page 1 of 1)

  • classification_ss:"025.04 / dc22"
  1. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.01
    0.012479308 = product of:
      0.08735515 = sum of:
        0.08735515 = weight(_text_:government in 3185) [ClassicSimilarity], result of:
          0.08735515 = score(doc=3185,freq=2.0), product of:
            0.23146805 = queryWeight, product of:
              5.6930003 = idf(docFreq=404, maxDocs=44218)
              0.04065836 = queryNorm
            0.37739617 = fieldWeight in 3185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6930003 = idf(docFreq=404, maxDocs=44218)
              0.046875 = fieldNorm(doc=3185)
      0.14285715 = coord(1/7)
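    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: the final score is coord * queryWeight * fieldWeight, with tf = sqrt(freq) and queryWeight = idf * queryNorm. A minimal sketch recomputing result 1's score from the constants listed above (the helper name classic_score is my own, not part of Lucene's API):

    ```python
    import math

    def classic_score(freq, idf, query_norm, field_norm, coord):
        """Reproduce a single-term ClassicSimilarity explain tree."""
        tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
        query_weight = idf * query_norm       # 5.6930003 * 0.04065836 = 0.23146805
        field_weight = tf * idf * field_norm  # fieldWeight = 0.37739617
        return coord * query_weight * field_weight

    score = classic_score(freq=2.0, idf=5.6930003, query_norm=0.04065836,
                          field_norm=0.046875, coord=1 / 7)
    print(score)  # close to the reported 0.012479308
    ```

    The same function reproduces results 2 and 3 by substituting their freq, idf, fieldNorm, and coord values; coord(1/7) indicates that only one of the seven query terms matched the document.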
    
    Abstract
    With this title, readers learn to push the search engine to its limits and extract the best content from Google, without having to learn complicated code. "Google Power" takes Google users under the hood and teaches them a wide range of advanced web search techniques through practical examples. Its content is organised by topic, so readers learn how to conduct in-depth searches on the most popular search topics, from health to government listings to people.
  2. Aberer, K. et al.: The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings (2007) 0.01
    0.008121559 = product of:
      0.05685091 = sum of:
        0.05685091 = weight(_text_:networks in 2477) [ClassicSimilarity], result of:
          0.05685091 = score(doc=2477,freq=4.0), product of:
            0.19231078 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.04065836 = queryNorm
            0.29562 = fieldWeight in 2477, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2477)
      0.14285715 = coord(1/7)
    
    LCSH
    Computer Communication Networks
    Subject
    Computer Communication Networks
  3. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0045070075 = product of:
      0.03154905 = sum of:
        0.03154905 = weight(_text_:standards in 636) [ClassicSimilarity], result of:
          0.03154905 = score(doc=636,freq=4.0), product of:
            0.18121246 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.04065836 = queryNorm
            0.17409979 = fieldWeight in 636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.14285715 = coord(1/7)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
