Search (2 results, page 1 of 1)

  • type_ss:"s"
  • classification_ss:"ST 270"
  1. Social information retrieval systems : emerging technologies and applications for searching the Web effectively (2008) 0.07
    0.07437795 = product of:
      0.111566916 = sum of:
        0.07101121 = weight(_text_:search in 4127) [ClassicSimilarity], result of:
          0.07101121 = score(doc=4127,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.4063998 = fieldWeight in 4127, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=4127)
        0.04055571 = product of:
          0.08111142 = sum of:
            0.08111142 = weight(_text_:engines in 4127) [ClassicSimilarity], result of:
              0.08111142 = score(doc=4127,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.31755137 = fieldWeight in 4127, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4127)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    • Collaborating to search effectively in different searcher modes through cues and specialty search / Naresh Kumar Agarwal and Danny C.C. Poo
    • Collaborative querying using a hybrid content and results-based approach / Chandrani Sinha Ray ... [et al.]
    • Collaborative classification for group-oriented organization of search results / Keiichi Nakata and Amrish Singh
    • A case study of use-centered descriptions : archival descriptions of what can be done with a collection / Richard Butterworth
    • Metadata for social recommendations : storing, sharing, and reusing evaluations of learning resources / Riina Vuorikari, Nikos Manouselis, and Erik Duval
    • Social network models for enhancing reference-based search engine rankings / Nikolaos Korfiatis ... [et al.]
    • From PageRank to social rank : authority-based retrieval in social information spaces / Sebastian Marius Kirsch ... [et al.]
    • Adaptive peer-to-peer social networks for distributed content-based Web search / Le-Shin Wu ... [et al.]
    • The ethics of social information retrieval / Brendan Luyt and Chu Keong Lee
    • The social context of knowledge / Daniel Memmi
    • Social information seeking in digital libraries / George Buchanan and Annika Hinze
    • Relevant intra-actions in networked environments / Theresa Dirndorfer Anderson
    • Publication and citation analysis as a tool for information retrieval / Ronald Rousseau
    • Personalized information retrieval in a semantic-based learning environment / Antonella Carbonaro and Rodolfo Ferrini
    • Multi-agent tourism system (MATS) / Soe Yu Maw and Myo-Myo Naing
    • Hybrid recommendation systems : a case study on the movies domain / Konstantinos Markellos ... [et al.]
    LCSH
    Web search engines
    Subject
    Web search engines
  2. TREC: experiment and evaluation in information retrieval (2005) 0.03
    0.027764294 = product of:
      0.04164644 = sum of:
        0.02372318 = weight(_text_:search in 636) [ClassicSimilarity], result of:
          0.02372318 = score(doc=636,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.13576864 = fieldWeight in 636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.01792326 = product of:
          0.03584652 = sum of:
            0.03584652 = weight(_text_:engines in 636) [ClassicSimilarity], result of:
              0.03584652 = score(doc=636,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.1403392 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
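The ClassicSimilarity explain trees shown with each result can be recomputed directly from their leaf values: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm with tf(freq) = sqrt(freq); partial clause matches are then scaled by the coord factors. A minimal sketch in Python, using only the numbers printed in the trees above (the function name is illustrative, not a Lucene API):

```python
import math

def term_weight(freq, idf, field_norm, query_norm):
    """Per-term ClassicSimilarity weight, as in the explain trees:
    weight = queryWeight * fieldWeight
           = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    """
    tf = math.sqrt(freq)                  # tf(freq)
    field_weight = tf * idf * field_norm  # fieldWeight
    query_weight = idf * query_norm       # queryWeight
    return query_weight * field_weight

QUERY_NORM = 0.05027291  # queryNorm, shared by both results

# Result 1 (doc 4127): "search" occurs 14 times, "engines" 4 times.
search_1 = term_weight(freq=14.0, idf=3.475677, field_norm=0.03125, query_norm=QUERY_NORM)
engines_1 = term_weight(freq=4.0, idf=5.080822, field_norm=0.03125, query_norm=QUERY_NORM)
score_1 = (search_1 + 0.5 * engines_1) * (2.0 / 3.0)  # coord(1/2), coord(2/3)

# Result 2 (doc 636): lower term frequencies and a smaller fieldNorm.
search_2 = term_weight(freq=4.0, idf=3.475677, field_norm=0.01953125, query_norm=QUERY_NORM)
engines_2 = term_weight(freq=2.0, idf=5.080822, field_norm=0.01953125, query_norm=QUERY_NORM)
score_2 = (search_2 + 0.5 * engines_2) * (2.0 / 3.0)

print(score_1)  # ≈ 0.07437795, the 0.07 shown for result 1
print(score_2)  # ≈ 0.02776429, the 0.03 shown for result 2
```

Result 2 matches the same two terms but scores lower for two reasons visible in the tree: smaller term frequencies (4 and 2 versus 14 and 4) and a smaller fieldNorm (0.01953125 versus 0.03125), which in ClassicSimilarity penalizes longer fields.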