Search (112 results, page 1 of 6)

  • theme_ss:"Retrievalstudien"
  1. Hawking, D.; Craswell, N.: The very large collection and Web tracks (2005) 0.01
    0.014639623 = product of:
      0.04391887 = sum of:
        0.024749206 = product of:
          0.049498413 = sum of:
            0.049498413 = weight(_text_:web in 5085) [ClassicSimilarity], result of:
              0.049498413 = score(doc=5085,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.43268442 = fieldWeight in 5085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5085)
          0.5 = coord(1/2)
        0.019169662 = product of:
          0.057508986 = sum of:
            0.057508986 = weight(_text_:29 in 5085) [ClassicSimilarity], result of:
              0.057508986 = score(doc=5085,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.46638384 = fieldWeight in 5085, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5085)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    29. 3.1996 18:16:49
  2. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.01
    0.011999531 = product of:
      0.03599859 = sum of:
        0.023333777 = product of:
          0.046667553 = sum of:
            0.046667553 = weight(_text_:web in 261) [ClassicSimilarity], result of:
              0.046667553 = score(doc=261,freq=4.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.4079388 = fieldWeight in 261, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=261)
          0.5 = coord(1/2)
        0.012664813 = product of:
          0.037994437 = sum of:
            0.037994437 = weight(_text_:22 in 261) [ClassicSimilarity], result of:
              0.037994437 = score(doc=261,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.30952093 = fieldWeight in 261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=261)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The German search engines Abacho, Acoon, Fireball, and Lycos as well as the Web directories Web.de and Yahoo! are subjected to a quality test measuring relative recall, precision, and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, a recall of roughly 22%, a precision of just under 19%, and an availability of 24% are achieved.
  3. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.01
    0.009721428 = product of:
      0.029164284 = sum of:
        0.01649947 = product of:
          0.03299894 = sum of:
            0.03299894 = weight(_text_:web in 3572) [ClassicSimilarity], result of:
              0.03299894 = score(doc=3572,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2884563 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
        0.012664813 = product of:
          0.037994437 = sum of:
            0.037994437 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.037994437 = score(doc=3572,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Source
    Online. 22(1998) no.3, S.24-26,28
  4. The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.01
    0.009721428 = product of:
      0.029164284 = sum of:
        0.01649947 = product of:
          0.03299894 = sum of:
            0.03299894 = weight(_text_:web in 4049) [ClassicSimilarity], result of:
              0.03299894 = score(doc=4049,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.2884563 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.5 = coord(1/2)
        0.012664813 = product of:
          0.037994437 = sum of:
            0.037994437 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.037994437 = score(doc=4049,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Proceedings of the 11th TREC conference, held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was the discussion of retrieval and related information-seeking tasks on large test collections. 93 research groups applied different techniques to information retrieval on the same large database, a procedure that makes the results comparable. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  5. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.01
    0.00848153 = product of:
      0.050889175 = sum of:
        0.050889175 = product of:
          0.07633376 = sum of:
            0.038339324 = weight(_text_:29 in 5002) [ClassicSimilarity], result of:
              0.038339324 = score(doc=5002,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.31092256 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
            0.037994437 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.037994437 = score(doc=5002,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    19. 3.1996 11:22:12
    Source
    Journal of documentation. 29(1973) no.3, S.251-257
  6. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.01
    0.0060758926 = product of:
      0.018227678 = sum of:
        0.01031217 = product of:
          0.02062434 = sum of:
            0.02062434 = weight(_text_:web in 2587) [ClassicSimilarity], result of:
              0.02062434 = score(doc=2587,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.18028519 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
        0.007915508 = product of:
          0.023746524 = sum of:
            0.023746524 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
              0.023746524 = score(doc=2587,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.19345059 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose: The purpose of this paper is to propose a method that yields more accurate results when comparing the performance of paired information retrieval (IR) systems than the current method, which is based on the systems' mean effectiveness scores across a set of identified topics/queries. Design/methodology/approach: In the proposed approach, document-level scores rather than the classic set of topic scores are used as the evaluation unit. These document scores are the defined document weights, which take over the role that the systems' mean average precision (MAP) scores play as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection. Findings: The p-values generated by the two types of significance tests, namely Student's t-test and the Mann-Whitney test, show that with document-level scores as the evaluation unit the difference between IR systems is more significant than with topic scores (see the paired-test sketch at the end of this page). Originality/value: Utilizing a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
  7. Bar-Ilan, J.: The Web as an information source on informetrics? : A content analysis (2000) 0.01
    0.005833444 = product of:
      0.035000663 = sum of:
        0.035000663 = product of:
          0.07000133 = sum of:
            0.07000133 = weight(_text_:web in 4587) [ClassicSimilarity], result of:
              0.07000133 = score(doc=4587,freq=16.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.6119082 = fieldWeight in 4587, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4587)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web, waiting to be extracted from the millions of Web pages.
  8. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.01
    0.005499824 = product of:
      0.03299894 = sum of:
        0.03299894 = product of:
          0.06599788 = sum of:
            0.06599788 = weight(_text_:web in 760) [ClassicSimilarity], result of:
              0.06599788 = score(doc=760,freq=8.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.5769126 = fieldWeight in 760, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=760)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
  9. MacFarlane, A.: Evaluation of web search for the information practitioner (2007) 0.01
    0.0054566874 = product of:
      0.032740124 = sum of:
        0.032740124 = product of:
          0.06548025 = sum of:
            0.06548025 = weight(_text_:web in 817) [ClassicSimilarity], result of:
              0.06548025 = score(doc=817,freq=14.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.57238775 = fieldWeight in 817, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=817)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The aim of the paper is to put forward a structured mechanism for web search evaluation. The paper seeks to point to useful scientific research and show how information practitioners can use these methods in the evaluation of web search for their users. Design/methodology/approach - The paper puts forward an approach which utilizes traditional laboratory-based evaluation measures such as average precision/precision at N documents, augmented with diagnostic measures such as broken links, which are used to show why precision measures are depressed as well as to assess the quality of the search engine's crawling mechanism. Findings - The paper shows how to use diagnostic measures in conjunction with precision in order to evaluate web search. Practical implications - The methodology presented in this paper will be useful to any information professional who regularly uses web search as part of their information seeking and needs to evaluate web search services. Originality/value - The paper argues that the use of diagnostic measures is essential in web search, as precision measures on their own do not allow a searcher to understand why search results differ between search engines.
  10. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.01
    0.005300956 = product of:
      0.031805735 = sum of:
        0.031805735 = product of:
          0.0477086 = sum of:
            0.023962079 = weight(_text_:29 in 4540) [ClassicSimilarity], result of:
              0.023962079 = score(doc=4540,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.19432661 = fieldWeight in 4540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4540)
            0.023746524 = weight(_text_:22 in 4540) [ClassicSimilarity], result of:
              0.023746524 = score(doc=4540,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.19345059 = fieldWeight in 4540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4540)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    12. 7.2011 18:29:22
  11. Sünkler, S.: Prototypische Entwicklung einer Software für die Erfassung und Analyse explorativer Suchen in Verbindung mit Tests zur Retrievaleffektivität (2012) 0.01
    0.0050625536 = product of:
      0.03037532 = sum of:
        0.03037532 = product of:
          0.06075064 = sum of:
            0.06075064 = weight(_text_:seite in 479) [ClassicSimilarity], result of:
              0.06075064 = score(doc=479,freq=2.0), product of:
                0.19633847 = queryWeight, product of:
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.03505379 = queryNorm
                0.3094179 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.601063 = idf(docFreq=443, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    This thesis develops a functional prototype of a web application that links the evaluation of exploratory searches with the execution of classic retrieval tests. As the basis for programming the prototype, user-oriented and system-oriented evaluation methods for search engines are analyzed and combined into a theoretical model for studying information systems and search engines. The design of the model and the prototype shows how recorded interaction data can be used in practice for search engine evaluation: on the one hand to obtain a data basis for retrieval tests, and on the other hand to take the implicit feedback expressed by users' actions into account when analyzing relevance judgments. Retrieval tests are the established and proven means of measuring the retrieval effectiveness of information systems and search engines, but they forgo any consideration of actual user behavior. One method for capturing the interactions of search engine users is protocol-based testing, which generates log files about the users of an application. The software implemented in the course of this work offers an approach to conducting retrieval tests on the basis of logged user data combined with controlled search tasks. The result of this work is a finished, functional prototype whose scope already makes it usable within search engine studies.
  12. Cooper, M.D.; Chen, H.-M.: Predicting the relevance of a library catalog search (2001) 0.00
    0.0048798746 = product of:
      0.014639623 = sum of:
        0.008249735 = product of:
          0.01649947 = sum of:
            0.01649947 = weight(_text_:web in 6519) [ClassicSimilarity], result of:
              0.01649947 = score(doc=6519,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.14422815 = fieldWeight in 6519, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6519)
          0.5 = coord(1/2)
        0.0063898875 = product of:
          0.019169662 = sum of:
            0.019169662 = weight(_text_:29 in 6519) [ClassicSimilarity], result of:
              0.019169662 = score(doc=6519,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.15546128 = fieldWeight in 6519, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6519)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Relevance has been a difficult concept to define, let alone measure. In this paper, a simple operational definition of relevance is proposed for a Web-based library catalog: whether or not during a search session the user saves, prints, mails, or downloads a citation. If one of those actions is performed, the session is considered relevant to the user. An analysis is presented illustrating the advantages and disadvantages of this definition. With this definition and good transaction logging, it is possible to ascertain the relevance of a session. This was done for 905,970 sessions conducted with the University of California's Melvyl online catalog. Next, a methodology was developed to try to predict the relevance of a session. A number of variables were defined that characterize a session, none of which used any demographic information about the user. The values of the variables were computed for the sessions. Principal components analysis was used to extract a new set of variables out of the original set. A stratified random sampling technique was used to form ten strata such that each stratum of 90,570 sessions contained the same proportion of relevant to nonrelevant sessions. Logistic regression was used to ascertain the regression coefficients for nine of the ten strata. Then, the coefficients were used to predict the relevance of the sessions in the missing stratum. Overall, 17.85% of the sessions were determined to be relevant. The predicted number of relevant sessions for all ten strata was 11%, a 6.85% difference. The authors believe that the methodology can be further refined and the prediction improved. This methodology could also have significant application in improving user searching and also in predicting electronic commerce buying decisions without the use of personal demographic data.
    Date
    29. 9.2001 17:26:02
  13. Lazonder, A.W.; Biemans, H.J.A.; Wopereis, I.G.J.H.: Differences between novice and experienced users in searching information on the World Wide Web (2000) 0.00
    0.0046117427 = product of:
      0.027670456 = sum of:
        0.027670456 = product of:
          0.055340912 = sum of:
            0.055340912 = weight(_text_:web in 4598) [ClassicSimilarity], result of:
              0.055340912 = score(doc=4598,freq=10.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.48375595 = fieldWeight in 4598, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4598)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Searching for information on the WWW basically comes down to locating an appropriate Web site and to retrieving relevant information from that site. This study examined the effect of a user's WWW experience on both phases of the search process. 35 students from 2 schools for Dutch pre-university education were observed while performing 3 search tasks. The results indicate that subjects with WWW-experience are more proficient in locating Web sites than are novice WWW-users. The observed differences were ascribed to the experts' superior skills in operating Web search engines. However, on tasks that required subjects to locate information on specific Web sites, the performance of experienced and novice users was equivalent - a result that is in line with hypertext research. Based on these findings, implications for training and supporting students in searching for information on the WWW are identified. Finally, the role of the subjects' level of domain expertise is discussed and directions for future research are proposed
  14. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.004308094 = product of:
      0.012924281 = sum of:
        0.008930601 = product of:
          0.017861202 = sum of:
            0.017861202 = weight(_text_:web in 636) [ClassicSimilarity], result of:
              0.017861202 = score(doc=636,freq=6.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.15613155 = fieldWeight in 636, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
        0.00399368 = product of:
          0.011981039 = sum of:
            0.011981039 = weight(_text_:29 in 636) [ClassicSimilarity], result of:
              0.011981039 = score(doc=636,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.097163305 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Enthält die Beiträge: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Date
    29. 3.1996 18:16:49
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  15. Voorhees, E.M.; Harman, D.K.: The Text REtrieval Conference (2005) 0.00
    0.00426989 = product of:
      0.01280967 = sum of:
        0.0072185188 = product of:
          0.0144370375 = sum of:
            0.0144370375 = weight(_text_:web in 5082) [ClassicSimilarity], result of:
              0.0144370375 = score(doc=5082,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.12619963 = fieldWeight in 5082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5082)
          0.5 = coord(1/2)
        0.0055911513 = product of:
          0.016773453 = sum of:
            0.016773453 = weight(_text_:29 in 5082) [ClassicSimilarity], result of:
              0.016773453 = score(doc=5082,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.13602862 = fieldWeight in 5082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5082)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
    Date
    29. 3.1996 18:16:49
  16. Hofstede, M.: Literatuur over onderwerpen zoeken in de OPC (1994) 0.00
    0.0042599253 = product of:
      0.02555955 = sum of:
        0.02555955 = product of:
          0.07667865 = sum of:
            0.07667865 = weight(_text_:29 in 5400) [ClassicSimilarity], result of:
              0.07667865 = score(doc=5400,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.6218451 = fieldWeight in 5400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=5400)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    CRI bulletin. 29(1994), Sept., S.14-15
  17. Agata, T.: A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.00
    0.0041248677 = product of:
      0.024749206 = sum of:
        0.024749206 = product of:
          0.049498413 = sum of:
            0.049498413 = weight(_text_:web in 3892) [ClassicSimilarity], result of:
              0.049498413 = score(doc=3892,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.43268442 = fieldWeight in 3892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3892)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
  18. Hancock-Beaulieu, M.; McKenzie, L.; Irving, A.: Evaluative protocols for searching behaviour in online library catalogues (1991) 0.00
    0.0037274342 = product of:
      0.022364605 = sum of:
        0.022364605 = product of:
          0.06709381 = sum of:
            0.06709381 = weight(_text_:29 in 347) [ClassicSimilarity], result of:
              0.06709381 = score(doc=347,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.5441145 = fieldWeight in 347, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=347)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    23. 1.1999 19:52:29
  19. Harman, D.K.: The TREC test collections (2005) 0.00
    0.0037274342 = product of:
      0.022364605 = sum of:
        0.022364605 = product of:
          0.06709381 = sum of:
            0.06709381 = weight(_text_:29 in 4637) [ClassicSimilarity], result of:
              0.06709381 = score(doc=4637,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.5441145 = fieldWeight in 4637, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4637)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.1996 18:16:49
  20. Buckley, C.; Voorhees, E.M.: Retrieval system evaluation (2005) 0.00
    0.0037274342 = product of:
      0.022364605 = sum of:
        0.022364605 = product of:
          0.06709381 = sum of:
            0.06709381 = weight(_text_:29 in 648) [ClassicSimilarity], result of:
              0.06709381 = score(doc=648,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.5441145 = fieldWeight in 648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=648)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.1996 18:16:49
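
The indented breakdowns attached to each result above are Lucene ClassicSimilarity (TF-IDF) explain trees: every matched term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and coord factors scale the sum by the fraction of query clauses that matched. The Python sketch below is a non-authoritative recomputation of the score of result 1 (doc 5085) from the figures shown in its tree; the helper names are ours, and the idf formula is the standard ClassicSimilarity one, consistent with the displayed values.

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf; reproduces the values in the explain trees,
        # e.g. idf(4597, 44218) ~ 3.2635105 for the term "web"
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    assert abs(idf(4597, 44218) - 3.2635105) < 1e-4
    assert abs(idf(3565, 44218) - 3.5176873) < 1e-4

    def leaf_score(freq, idf_value, query_norm, field_norm):
        # One weight(_text_:term ...) leaf: queryWeight * fieldWeight
        tf = math.sqrt(freq)                        # 1.4142135 for freq=2.0
        query_weight = idf_value * query_norm       # 0.11439841 for "web"
        field_weight = tf * idf_value * field_norm  # 0.43268442 in doc 5085
        return query_weight * field_weight          # 0.049498413

    # Result 1: the "web" clause is scaled by coord(1/2), the "29" clause by
    # coord(1/3), and the whole sum by coord(2/6) for 2 of 6 matching clauses.
    web = leaf_score(2.0, 3.2635105, 0.03505379, 0.09375) * (1 / 2)
    t29 = leaf_score(2.0, 3.5176873, 0.03505379, 0.09375) * (1 / 3)
    print((web + t29) * (2 / 6))  # ~0.014639623, the score shown for result 1,
                                  # up to 32-bit float rounding in Lucene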

Types

  • a 100
  • s 7
  • m 5
  • el 3
  • x 2
  • p 1
  • r 1
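
Entry 6 above (Ravana et al. 2015) turns on a detail that is easy to miss: whether two retrieval systems differ significantly is decided by a paired statistical test over their per-topic (or, in the paper's proposal, per-document) effectiveness scores. The following is a minimal sketch of that comparison, assuming SciPy is available; the average-precision values are invented illustration data, not figures from the paper.

    # Paired comparison of two IR systems via the tests named in entry 6:
    # Student's t-test and the Mann-Whitney test. Scores are hypothetical.
    from scipy import stats

    system_a = [0.42, 0.31, 0.58, 0.12, 0.77, 0.45, 0.29, 0.63]  # AP per topic
    system_b = [0.38, 0.35, 0.49, 0.10, 0.71, 0.40, 0.33, 0.55]

    t_stat, t_p = stats.ttest_rel(system_a, system_b)     # paired Student's t-test
    u_stat, u_p = stats.mannwhitneyu(system_a, system_b)  # Mann-Whitney U test
    print(f"t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}")

A smaller p-value from the document-level variant, relative to the topic-level one, is exactly the effect the paper reports on the TREC 9 Web track collection.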