Search (5 results, page 1 of 1)

  • classification_ss:"025.04 / dc22"
  1. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.00365042 = product of:
      0.00730084 = sum of:
        0.00730084 = product of:
          0.02190252 = sum of:
            0.02190252 = weight(_text_:j in 636) [ClassicSimilarity], result of:
              0.02190252 = score(doc=636,freq=6.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.1520165 = fieldWeight in 636, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
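
    The tree above is Lucene's score explanation for this hit under ClassicSimilarity (TF-IDF): tf is the square root of the term frequency, idf is derived from the document frequency, the leaf weight is queryWeight times fieldWeight, and the result is scaled by the two coord factors. As a minimal sketch, the computation can be replayed in Python; the constants are copied from the tree, while the idf formula (1 + ln(maxDocs/(docFreq+1))) is the standard ClassicSimilarity definition rather than something printed on this page:

      import math

      # Constants copied from the explain tree for doc 636.
      doc_freq, max_docs = 5010, 44218
      freq = 6.0
      query_norm = 0.04534384
      field_norm = 0.01953125

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.1774964
      tf = math.sqrt(freq)                             # 2.4494898 = tf(freq=6.0)
      query_weight = idf * query_norm                  # 0.14407988 = queryWeight
      field_weight = tf * idf * field_norm             # 0.1520165  = fieldWeight
      leaf = query_weight * field_weight               # 0.02190252 = weight(_text_:j)

      # The two coordination factors shown in the tree: coord(1/3) and coord(1/2).
      score = leaf * (1.0 / 3.0) * 0.5
      print(f"{score:.8f}")                            # 0.00365042

    Run as written, this prints 0.00365042, matching the document score reported above.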
    
    Content
    Contains the contributions:
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Review in: JASIST 58(2007) no.6, p.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval and speeds the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been substantial, and its success rests largely on its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems, such as Web retrieval, speech retrieval, and question answering. The long series of annual TREC conferences has produced an immense body of documents reflecting the various evaluation and research efforts, which sometimes makes it difficult to see clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC's history and summarizes the main lessons learned.
    The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that participated extensively in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  2. O'Connor, B.C.; Kearns, J.; Anderson, R.L.: Doing things with information : beyond indexing and abstracting (2008) 0.00
    0.0033721137 = product of:
      0.0067442274 = sum of:
        0.0067442274 = product of:
          0.020232681 = sum of:
            0.020232681 = weight(_text_:j in 4297) [ClassicSimilarity], result of:
              0.020232681 = score(doc=4297,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.14042683 = fieldWeight in 4297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4297)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  3. Stuckenschmidt, H.; Harmelen, F. van: Information sharing on the semantic web (2005) 0.00
    0.0025769281 = product of:
      0.0051538562 = sum of:
        0.0051538562 = product of:
          0.015461569 = sum of:
            0.015461569 = weight(_text_:h in 2789) [ClassicSimilarity], result of:
              0.015461569 = score(doc=2789,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.13724773 = fieldWeight in 2789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2789)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  4. Antoniou, G.; Harmelen, F. van: A semantic Web primer (2004) 0.00
    0.0012884641 = product of:
      0.0025769281 = sum of:
        0.0025769281 = product of:
          0.0077307844 = sum of:
            0.0077307844 = weight(_text_:h in 468) [ClassicSimilarity], result of:
              0.0077307844 = score(doc=468,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.06862386 = fieldWeight in 468, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=468)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Review in: JASIST 57(2006) no.8, p.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploitation of this machine-processable metadata. To this end, it provides metalanguages such as RDF, OWL, DAML+OIL, and SHOE for expressing knowledge with clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. For the coming years, these efforts remain concentrated in the research and development community; in the next phase, the Semantic Web will respond more intelligently to user queries.
    The first chapter opens with an excellent introduction to the Semantic Web vision. First, today's Web is introduced, and problems with some current applications, such as search engines, are covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes a brief description of the underpinning technologies, including metadata, ontologies, logic, and agents. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web.
    From chapter 2 on, the book addresses the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format that is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are omitted because of the extensiveness of the XML technologies; many other books on XML provide more comprehensive coverage.
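    The toolchain the review sketches for chapter 2 can be made concrete with a few lines of code. The following is an illustrative sketch only (the RDF/XML fragment and the ISBN identifier are invented for this example, not taken from the book): it shows XML namespaces and a simple path query using Python's standard library, which supports a limited XPath subset (full XPath and XSLT would need a library such as lxml):

      import xml.etree.ElementTree as ET

      # A toy RDF/XML fragment: XML syntax, namespaces to disambiguate
      # vocabularies, and path expressions to query the result.
      data = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                         xmlns:dc="http://purl.org/dc/elements/1.1/">
        <rdf:Description rdf:about="urn:isbn:0262012103">
          <dc:title>A Semantic Web Primer</dc:title>
          <dc:creator>G. Antoniou</dc:creator>
          <dc:creator>F. van Harmelen</dc:creator>
        </rdf:Description>
      </rdf:RDF>"""

      ns = {
          "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
          "dc": "http://purl.org/dc/elements/1.1/",
      }
      root = ET.fromstring(data)
      for desc in root.findall("rdf:Description", ns):
          title = desc.find("dc:title", ns).text
          creators = [c.text for c in desc.findall("dc:creator", ns)]
          print(title, "-", ", ".join(creators))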
  5. Spink, A.; Jansen, B.J.: Web searching : public searching of the Web (2004) 0.00
    0.0012884641 = product of:
      0.0025769281 = sum of:
        0.0025769281 = product of:
          0.0077307844 = sum of:
            0.0077307844 = weight(_text_:h in 1443) [ClassicSimilarity], result of:
              0.0077307844 = score(doc=1443,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.06862386 = fieldWeight in 1443, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1443)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Footnote
    Review in: Information - Wissenschaft und Praxis 56(2004) no.1, p.61-62 (D. Lewandowski): "The authors of this volume have made a name for themselves in recent years through numerous publications on the behavior of search engine users. The book at hand summarizes these scattered articles and places their results in the context of a broader research program. Spink and Jansen analyze usage behavior through search engine query logs, in which the server records information about the requests made to it. The data that can be extracted from these files include the submitted queries, the address of the computer from which a query was issued, and the documents selected from the result lists. The clear advantage of logfile analysis is that large amounts of data can be collected without much personnel effort. The data of a large number of anonymous users can be analyzed without the data collection itself influencing user behavior. This is particularly important for search engines because, unlike most other professional information retrieval systems, they are used not only in a work context but also (and above all) privately. Surveys and laboratory studies distort the picture of usage behavior because users misjudge their own querying or are reluctant to name the topics of their queries; queries for medical or pornographic content come to mind here. Logfile analysis has its own problems, however: not all desired data are actually contained in the logfiles (all information about the individual user is missing), no qualitative information such as the reason for a search is captured, and for technical reasons the logfiles are partly incomplete. From these pros and cons the authors conclude that logfiles are well suited to analyzing user behavior, but that the results of studies using other methods should be taken into account in the interpretation."
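    To make the method concrete, here is a minimal sketch of the kind of query-log analysis the review describes. It assumes a hypothetical tab-separated log whose fields mirror the three kinds of data the review names (timestamp and client address, the query, and the clicked document); real search engine logs differ in layout and fields:

      import csv
      from collections import Counter

      # Count the most frequent queries in a (hypothetical) tab-separated log
      # with columns: timestamp, client address, query, clicked URL.
      def top_queries(log_path, n=10):
          counts = Counter()
          with open(log_path, newline="", encoding="utf-8") as f:
              for timestamp, client, query, clicked in csv.reader(f, delimiter="\t"):
                  counts[query.strip().lower()] += 1  # normalize before counting
          return counts.most_common(n)

      if __name__ == "__main__":
          for query, freq in top_queries("queries.log"):
              print(f"{freq:6d}  {query}")

    As the review notes, such counts say nothing about individual users or their motives; they only summarize aggregate behavior.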
