Search (67 results, page 1 of 4)

  • Filter: theme_ss:"Retrievalstudien"
  1. Robins, D.: Shifts of focus on various aspects of user information problems during interactive information retrieval (2000) 0.02
    Abstract
    The author presents the results of additional analyses of shifts of focus in IR interaction. Results indicate that users and search intermediaries work toward search goals in a nonlinear fashion. Twenty interactions between 20 different users and one of four different search intermediaries were examined. Analysis of discourse between the two parties during interactive information retrieval (IR) shows that changes in topic occur, on average, every seven utterances. These twenty interactions comprised some 9,858 utterances and 1,439 foci. Utterances are defined as any uninterrupted sound, statement, gesture, etc., made by a participant in the discourse dyad. These utterances are segmented by the researcher according to their intentional focus, i.e., the topic on which the conversation between the user and search intermediary focuses until the focus changes (i.e., shifts of focus). In all but two of the 20 interactions, the search intermediary initiated a majority of the shifts of focus. Six focus categories were observed, dealing with: documents; evaluation of search results; search strategies; the IR system; the topic of the search; and information about the user
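The reported rate of topic change can be checked against the utterance and focus totals given in the abstract; a minimal sketch:

```python
# Totals reported in Robins (2000): 9,858 utterances segmented into 1,439 foci.
utterances = 9858
foci = 1439

# Average number of utterances per focus, i.e. how often the topic shifts.
utterances_per_focus = utterances / foci
print(round(utterances_per_focus, 2))  # ≈ 6.85, consistent with "every seven utterances"
```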
  2. Sanderson, M.: ¬The Reuters test collection (1996) 0.02
    Abstract
    Describes the Reuters test collection, which, at 22,173 references, is significantly larger than most traditional test collections. In addition, Reuters has none of the recall-calculation problems normally associated with some of the larger test collections available. Explains the method derived by D.D. Lewis for performing retrieval experiments on the Reuters collection, and illustrates the use of the collection with some simple retrieval experiments that compare the performance of stemming algorithms
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
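The kind of test-collection experiment described (comparing two runs, e.g. with and without stemming, against relevance judgments) can be sketched as follows; the document IDs, judgments, and run contents below are invented for illustration, not taken from the Reuters collection:

```python
# Toy test-collection experiment: two retrieval runs scored against
# relevance judgments with set-based precision and recall.

def precision_recall(retrieved, relevant):
    """Set-based precision and recall for one query."""
    hits = len(set(retrieved) & relevant)
    return hits / len(retrieved), hits / len(relevant)

relevant = {"d1", "d4", "d7"}            # judged relevant documents (invented)
run_no_stem = ["d1", "d2", "d3", "d4"]   # run without stemming (invented)
run_stem = ["d1", "d4", "d7", "d9"]      # run with stemming (invented)

for name, run in [("no stemming", run_no_stem), ("stemming", run_stem)]:
    p, r = precision_recall(run, relevant)
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
```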
  3. Ellis, D.: Progress and problems in information retrieval (1996) 0.02
    Date
    26. 7.2002 20:22:46
  4. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.02
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance, and can thus determine which of two similarly performing systems is superior. For both a single query term and a multiple query term retrieval model, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance-feedback-based retrieval and filtering. Simulations illustrate how the single-term model performs, and sample performance predictions are given for single-term and multiple-term problems
    Date
    22. 2.1996 13:14:10
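The idea of predicting performance analytically, without experimentation, can be illustrated with the elementary best-case and random-case expected search lengths computed from database parameters alone; these simple formulas stand in for Losee's full probabilistic model, which the abstract does not spell out:

```python
# Analytic (no-experiment) sketch: expected average search length (ASL)
# to reach a relevant document, from database parameters alone.

def asl_random(n_docs):
    # Random ordering: a relevant document sits, on average, mid-list.
    return (n_docs + 1) / 2

def asl_perfect(n_relevant):
    # Perfect ranking: all relevant documents are ranked first, so the
    # average relevant document sits in the middle of the relevant block.
    return (n_relevant + 1) / 2

print(asl_random(1000))   # 500.5
print(asl_perfect(10))    # 5.5
```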
  5. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.02
    Abstract
    A study was done to test the effectiveness of retrieval using title-word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields, the social sciences had the best retrieval rate, science the next best, and the arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  6. Smeaton, A.F.; Harman, D.: ¬The TREC experiments and their impact on Europe (1997) 0.02
    Abstract
    Reviews the overall results of the TREC experiments in information retrieval, which differed from other information retrieval research projects in that the document collections used in the research were massive, and the groups participating in the collaborative evaluation are among the main organizations in the field. Reviews the findings of TREC, the way in which it operates, and the specialist 'tracks' it supports, and concentrates on European involvement in TREC, examining the participants and the emergence of European TREC-like exercises
  7. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.01
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points, or thresholds, are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises for comparing indexing and retrieval techniques. Reports experiments with a portion of the TREC data in which features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness
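Posting-list thresholding as described can be sketched as follows; the toy index contents and the simple tf accumulation are assumptions for illustration, not the authors' actual method:

```python
# Sketch of posting-list thresholding: postings are kept sorted by
# within-document term frequency, so processing can be cut off after the
# top-k entries per term, trading a little effectiveness for speed.
from collections import defaultdict

index = {
    # term -> list of (doc_id, term_frequency), sorted by tf descending (invented data)
    "retrieval": [("d3", 9), ("d1", 4), ("d7", 2), ("d5", 1)],
    "threshold": [("d3", 5), ("d5", 3), ("d2", 1)],
}

def score(query_terms, threshold=2):
    """Accumulate simple tf scores, reading at most `threshold` postings per term."""
    scores = defaultdict(int)
    for term in query_terms:
        for doc_id, tf in index.get(term, [])[:threshold]:  # the cut-off point
            scores[doc_id] += tf
    return sorted(scores.items(), key=lambda x: -x[1])

print(score(["retrieval", "threshold"]))  # [('d3', 14), ('d1', 4), ('d5', 3)]
```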
  8. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.01
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text databases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text databases with the text of the articles of these 204 citations revealed two reasons for these failures: the searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
  9. TREC: experiment and evaluation in information retrieval (2005) 0.01
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important, and its success has been mainly supported by its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems, such as Web retrieval, speech retrieval, and question answering. The long and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC's history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  10. MacFarlane, A.: Evaluation of web search for the information practitioner (2007) 0.01
    Abstract
    Purpose - The aim of the paper is to put forward a structured mechanism for web search evaluation. The paper seeks to point to useful scientific research and to show how information practitioners can use these methods in evaluating web search for their users. Design/methodology/approach - The paper puts forward an approach which utilizes traditional laboratory-based evaluation measures, such as average precision and precision at N documents, augmented with diagnostic measures, such as broken links, which are used to show why precision measures are depressed as well as the quality of the search engine's crawling mechanism. Findings - The paper shows how to use diagnostic measures in conjunction with precision in order to evaluate web search. Practical implications - The methodology presented in this paper will be useful to any information professional who regularly uses web search as part of their information seeking and needs to evaluate web search services. Originality/value - The paper argues that the use of diagnostic measures is essential in web search, as precision measures on their own do not allow a searcher to understand why search results differ between search engines.
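The combination of precision at N with a diagnostic measure can be sketched as follows; the result list, relevance judgments, and link statuses are invented for illustration:

```python
# Sketch of web-search evaluation with a diagnostic measure: precision at N
# over a judged result list, plus the fraction of broken links in the top N,
# which helps explain why precision is depressed.

results = [
    # (url_id, judged_relevant, link_ok) -- invented data
    ("u1", True, True),
    ("u2", False, False),   # broken link
    ("u3", True, True),
    ("u4", False, True),
    ("u5", True, False),
]

def precision_at_n(results, n):
    top = results[:n]
    return sum(rel for _, rel, _ in top) / n

def broken_link_rate(results, n):
    top = results[:n]
    return sum(not ok for _, _, ok in top) / n

print(precision_at_n(results, 5))    # 0.6
print(broken_link_rate(results, 5))  # 0.4
```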
  11. Li, J.; Zhang, P.; Song, D.; Wu, Y.: Understanding an enriched multidimensional user relevance model by analyzing query logs (2017) 0.01
    Abstract
    Modeling multidimensional relevance in information retrieval (IR) has attracted much attention in recent years. However, most existing studies are conducted through relatively small-scale user studies, which may not reflect a real-world, natural search scenario. In this article, we propose to study the multidimensional user relevance model (MURM) on large-scale query logs, which record users' various search behaviors (e.g., query reformulations, clicks, and dwell time) in natural search settings. We advance an existing MURM model (comprising five dimensions: topicality, novelty, reliability, understandability, and scope) by adding two dimensions, interest and habit, which represent personalized relevance judgments on retrieved documents. Further, for each dimension in the enriched MURM model, a set of computable features is formulated. By conducting extensive document-ranking experiments on Bing's query logs and TREC Session Track data, we systematically investigated the impact of each dimension on retrieval performance and gained a series of insightful findings which may benefit the design of future IR systems.
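A weighted combination over the seven MURM dimensions can be sketched as follows; the dimension names follow the abstract, but the feature values and uniform weights are invented for illustration, and the paper's actual feature formulation is not reproduced here:

```python
# Sketch of scoring a document under a multidimensional relevance model:
# each dimension contributes a feature score in [0, 1], and the document
# score is their weighted combination.

dimensions = ["topicality", "novelty", "reliability", "understandability",
              "scope", "interest", "habit"]

def murm_score(features, weights):
    """Weighted sum of per-dimension feature scores."""
    return sum(weights[d] * features[d] for d in dimensions)

weights = {d: 1.0 / len(dimensions) for d in dimensions}   # uniform weights (assumption)
features = {"topicality": 0.9, "novelty": 0.4, "reliability": 0.8,
            "understandability": 0.7, "scope": 0.5, "interest": 0.6,
            "habit": 0.3}                                   # invented feature values

print(round(murm_score(features, weights), 3))  # 0.6
```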
  12. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten [A retrieval test with automatically indexed documents] (1984) 0.01
    Date
    20.10.2000 12:22:23
  13. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    Source
    Online. 22(1998) no.6, S.57-58
  14. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    Date
    11. 8.2001 16:22:19
  15. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    Date
    22. 7.2006 18:43:54
  16. Tague-Sutcliffe, J.M.: Some perspectives on the evaluation of information retrieval systems (1996) 0.01
    Abstract
    As an introduction to the papers in this special issue, some of the major problems facing investigators evaluating information retrieval systems are presented. These problems include: the question of whether real users, as opposed to subject experts, must be used in making relevance judgements; the possibility of evaluating individual components of the retrieval process, rather than the process as a whole; the kinds of aggregation that are appropriate for the measures used in evaluating systems; the value of an analytic or simulatory, as opposed to an experimental, approach to evaluating retrieval systems; the difficulties in evaluating interactive systems; and the kinds of generalization which are possible from information retrieval tests.
  17. Keen, E.M.: Some aspects of proximity searching in text retrieval systems (1992) 0.01
    Abstract
    Describes and evaluates the proximity search facilities in external online systems and in-house retrieval software, discussing and illustrating their capabilities, syntax, and circumstances of use. Presents measurements of the overheads required by proximity for storage, record input time, and search time. The search-strategy narrowing effect of proximity is illustrated by recall and precision test results. Usage and problems lead to a number of design ideas for better implementation: some based on existing Boolean strategies, one on the use of weighted proximity to automatically produce ranked output. A comparison of Boolean, quorum, and proximate term-pair distance strategies is included
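A proximity operator of the kind evaluated can be sketched as follows; a real system would evaluate it over positional posting lists, while this toy version scans tokenized text directly, and the example document is invented:

```python
# Sketch of a proximity operator: two query terms match only if they
# occur within a given distance of each other in the token stream.

def within_distance(tokens, term_a, term_b, max_dist):
    """True if term_a and term_b occur within max_dist tokens of each other."""
    pos_a = [i for i, t in enumerate(tokens) if t == term_a]
    pos_b = [i for i, t in enumerate(tokens) if t == term_b]
    return any(abs(i - j) <= max_dist for i in pos_a for j in pos_b)

doc = "text retrieval systems support proximity searching in text".split()
print(within_distance(doc, "proximity", "searching", 1))  # True  (adjacent)
print(within_distance(doc, "retrieval", "searching", 2))  # False (4 tokens apart)
```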
  18. Frei, H.P.; Meienberg, S.; Schäuble, P.: ¬The perils of interpreting recall and precision values (1991) 0.01
    Abstract
    The traditional recall and precision measures are inappropriate for evaluating retrieval algorithms that retrieve information from wide area networks. The principal reason is that information available in WANs is dynamic and its size is orders of magnitude greater than that of the usual test collections. To overcome these problems, a new effectiveness measure has been developed, which we call the 'usefulness measure'
  19. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    Date
    27. 2.1999 20:55:22
  20. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.00
    Date
    27. 2.1999 20:59:22

Languages

  • e 61
  • d 3
  • chi 1
  • f 1

Types

  • a 60
  • s 5
  • m 4
  • r 1