Search (38 results, page 1 of 2)

  • × theme_ss:"Retrievalstudien"
  • × type_ss:"a"
  • × year_i:[1990 TO 2000}
  1. Agata, T.: ¬A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.06
    0.059441954 = product of:
      0.17832586 = sum of:
        0.11560701 = weight(_text_:wide in 3892) [ClassicSimilarity], result of:
          0.11560701 = score(doc=3892,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.5874411 = fieldWeight in 3892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.09375 = fieldNorm(doc=3892)
        0.062718846 = weight(_text_:web in 3892) [ClassicSimilarity], result of:
          0.062718846 = score(doc=3892,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 3892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3892)
      0.33333334 = coord(2/6)
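Every score breakdown in this list follows Lucene's ClassicSimilarity (TF-IDF) arithmetic: idf = 1 + ln(maxDocs/(docFreq+1)), tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, per-term score = queryWeight * fieldWeight, and coord() scales by the fraction of query clauses matched. A minimal Python sketch reproducing the first tree's numbers (constants taken directly from the explain output; the helper names are illustrative):

```python
import math

# Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# per-term score = queryWeight * fieldWeight, where
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(freq) * idf * fieldNorm
def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.044416238  # queryNorm from the explain tree

wide = term_score(2.0, 1430, 44218, QUERY_NORM, 0.09375)  # _text_:wide
web = term_score(2.0, 4597, 44218, QUERY_NORM, 0.09375)   # _text_:web
doc_score = (wide + web) * (2 / 6)  # coord(2/6): 2 of 6 clauses matched

# close to the tree values 0.11560701, 0.062718846, 0.059441954
print(wide, web, doc_score)
```

The same arithmetic accounts for every entry below; only freq, docFreq, and fieldNorm vary per document and field.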
    
  2. Wu, C.-J.: Experiments on using the Dublin Core to reduce the retrieval error ratio (1998) 0.03
    0.034674477 = product of:
      0.10402343 = sum of:
        0.067437425 = weight(_text_:wide in 5201) [ClassicSimilarity], result of:
          0.067437425 = score(doc=5201,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
        0.036585998 = weight(_text_:web in 5201) [ClassicSimilarity], result of:
          0.036585998 = score(doc=5201,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
      0.33333334 = coord(2/6)
    
    Abstract
     In order to test the power of metadata in information retrieval, an experiment was designed and conducted with a group of 7 graduate students, using the Dublin Core as the cataloguing metadata. Results show that, on average, the retrieval error rate is only 2.9 per cent for the MES system (http://140.136.85.194), which uses the Dublin Core to describe documents on the World Wide Web, in contrast to 20.7 per cent for 7 well-known search engines: HOTBOT, GAIS, LYCOS, EXCITE, INFOSEEK, YAHOO, and OCTOPUS. The very low error rate indicates that users can rely on the Dublin Core information to decide whether or not to retrieve a document
  3. Khan, K.; Locatis, C.: Searching through cyberspace : the effects of link display and link density on information retrieval from hypertext on the World Wide Web (1998) 0.03
    0.029720977 = product of:
      0.08916293 = sum of:
        0.057803504 = weight(_text_:wide in 446) [ClassicSimilarity], result of:
          0.057803504 = score(doc=446,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.29372054 = fieldWeight in 446, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=446)
        0.031359423 = weight(_text_:web in 446) [ClassicSimilarity], result of:
          0.031359423 = score(doc=446,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 446, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=446)
      0.33333334 = coord(2/6)
    
  4. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.03
    0.028721433 = product of:
      0.086164296 = sum of:
        0.068110935 = weight(_text_:computer in 4341) [ClassicSimilarity], result of:
          0.068110935 = score(doc=4341,freq=6.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.41961014 = fieldWeight in 4341, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4341)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
              0.03610672 = score(doc=4341,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 4341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4341)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
     Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments, and their effectiveness was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines were prepared for the teaching of information skills and for the production and use of the packages. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that the choice of searching strategies, search effectiveness and the use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, of how to adopt more effective strategies
    Source
    Journal of information science. 22(1996) no.2, S.79-92
    Theme
    Computer Based Training
  5. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.03
    0.027487947 = product of:
      0.08246384 = sum of:
        0.036585998 = weight(_text_:web in 399) [ClassicSimilarity], result of:
          0.036585998 = score(doc=399,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.04587784 = weight(_text_:computer in 399) [ClassicSimilarity], result of:
          0.04587784 = score(doc=399,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.33333334 = coord(2/6)
    
    Abstract
     Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results
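The "weighted term searching using powers of 2" that Davis refers to can be illustrated simply: giving the i-th query term the weight 2**i makes every sum of matched-term weights unique, so a document's score reveals exactly which terms it matched. A hypothetical sketch (the term list and documents are invented for illustration, not taken from the article):

```python
def rank_by_weighted_terms(query_terms, documents):
    """Score each document by the sum of powers-of-2 weights of the
    query terms it contains; equal scores imply identical term overlap."""
    weights = {t: 2 ** i for i, t in enumerate(query_terms)}
    scored = []
    for doc_id, text in documents.items():
        words = set(text.lower().split())
        score = sum(w for t, w in weights.items() if t in words)
        scored.append((score, doc_id))
    return sorted(scored, reverse=True)

docs = {
    "d1": "retrieval of web documents",
    "d2": "web browsing habits",
    "d3": "document retrieval systems on the web",
}
ranking = rank_by_weighted_terms(["web", "retrieval", "browsing"], docs)
print(ranking)  # d2 matches web (1) + browsing (4) = 5, and ranks first
```

Because each score is a distinct bit pattern over the query terms, the ranked output itself tells the searcher which term combination each document satisfied.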
  6. MacCall, S.L.; Cleveland, A.D.; Gibson, I.E.: Outline and preliminary evaluation of the classical digital library model (1999) 0.03
    0.026979826 = product of:
      0.08093948 = sum of:
        0.04816959 = weight(_text_:wide in 6541) [ClassicSimilarity], result of:
          0.04816959 = score(doc=6541,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 6541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
        0.03276989 = weight(_text_:computer in 6541) [ClassicSimilarity], result of:
          0.03276989 = score(doc=6541,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 6541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
      0.33333334 = coord(2/6)
    
    Abstract
     The growing number of networked information resources and services offers unprecedented opportunities for delivering high-quality information to the computer desktop of a wide range of individuals. However, there is currently a reliance on a database retrieval model, in which end users use keywords to search large collections of automatically indexed resources in order to find needed information. As an alternative to the database retrieval model, this paper outlines the classical digital library model, which is derived from the traditional practices of library and information science professionals. These practices include the selection and organization of information resources for local populations of users and the integration of advanced information retrieval tools, such as databases and the Internet, into these collections. To evaluate this model, library and information professionals and end users involved with primary care medicine were asked to respond to a series of questions comparing their experiences with a digital library developed for the primary care population to their experiences with general Internet use. Preliminary results are reported
  7. Sanderson, M.: ¬The Reuters test collection (1996) 0.03
    0.02550099 = product of:
      0.07650297 = sum of:
        0.05243182 = weight(_text_:computer in 6971) [ClassicSimilarity], result of:
          0.05243182 = score(doc=6971,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 6971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=6971)
        0.024071148 = product of:
          0.048142295 = sum of:
            0.048142295 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.048142295 = score(doc=6971,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  8. Kantor, P.; Kim, M.H.; Ibraev, U.; Atasoy, K.: Estimating the number of relevant documents in enormous collections (1999) 0.02
    0.02476748 = product of:
      0.07430244 = sum of:
        0.04816959 = weight(_text_:wide in 6690) [ClassicSimilarity], result of:
          0.04816959 = score(doc=6690,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 6690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6690)
        0.026132854 = weight(_text_:web in 6690) [ClassicSimilarity], result of:
          0.026132854 = score(doc=6690,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 6690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6690)
      0.33333334 = coord(2/6)
    
    Abstract
     In assessing information retrieval systems, it is important not only to know the precision of the retrieved set, but also to compare the number of retrieved relevant items to the total number of relevant items. For large collections, such as the TREC test collections, or the World Wide Web, it is not possible to enumerate the entire set of relevant documents. If the retrieved documents are evaluated, a variant of the statistical "capture-recapture" method can be used to estimate the total number of relevant documents, provided that the several retrieval systems used are sufficiently independent. We show that the underlying signal detection model supporting such an analysis can be extended in two ways. First, assuming that there are two distinct performance characteristics (corresponding to the chance of retrieving a relevant, and of retrieving a given non-relevant, document), we show that if there are three or more independent systems available it is possible to estimate the number of relevant documents without actually having to decide whether each individual document is relevant. We report applications of this 3-system method to the TREC data, leading to the conclusion that the independence assumptions are not satisfied. We then extend the model to a multi-system, multi-problem model, and show that it is possible to include statistical dependencies of all orders in the model, and to determine the number of relevant documents for each of the problems in the set. Application to the TREC setting will be presented
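The two-system "capture-recapture" idea the abstract builds on is the classical Lincoln-Petersen estimator: if two sufficiently independent systems retrieve n1 and n2 relevant documents with m in common, the total relevant population is estimated as n1*n2/m. A minimal sketch of that two-system baseline (the judged sets are invented; this is not the authors' 3-system extension):

```python
def lincoln_petersen(relevant_a, relevant_b):
    """Estimate the total number of relevant documents from the
    judged-relevant result sets (doc-id sets) of two independent systems."""
    overlap = len(relevant_a & relevant_b)
    if overlap == 0:
        raise ValueError("no overlap between systems: estimate undefined")
    return len(relevant_a) * len(relevant_b) / overlap

# hypothetical judged-relevant doc ids from two retrieval systems
system_a = {101, 102, 103, 104, 105, 106}
system_b = {104, 105, 106, 107, 108}
print(lincoln_petersen(system_a, system_b))  # 6 * 5 / 3 -> 10.0
```

The estimator's bias when the independence assumption fails is exactly what the authors' TREC application exposes: correlated systems inflate the overlap and so underestimate the relevant population.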
  9. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.02
    0.021961238 = product of:
      0.06588371 = sum of:
        0.041812565 = weight(_text_:web in 3572) [ClassicSimilarity], result of:
          0.041812565 = score(doc=3572,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2884563 = fieldWeight in 3572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3572)
        0.024071148 = product of:
          0.048142295 = sum of:
            0.048142295 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.048142295 = score(doc=3572,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Source
    Online. 22(1998) no.3, S.24-26,28
  10. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.02
    0.019125743 = product of:
      0.057377227 = sum of:
        0.039323866 = weight(_text_:computer in 6967) [ClassicSimilarity], result of:
          0.039323866 = score(doc=6967,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 6967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.03610672 = score(doc=6967,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  11. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.015938118 = product of:
      0.047814354 = sum of:
        0.03276989 = weight(_text_:computer in 2339) [ClassicSimilarity], result of:
          0.03276989 = score(doc=2339,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2339,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
     Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems
    Date
    22. 9.1997 19:16:05
  12. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.01
    0.013937522 = product of:
      0.08362513 = sum of:
        0.08362513 = weight(_text_:web in 760) [ClassicSimilarity], result of:
          0.08362513 = score(doc=760,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5769126 = fieldWeight in 760, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
      0.16666667 = coord(1/6)
    
    Abstract
     Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines
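Because no engine's full relevant set is knowable, comparisons like this one are typically made against a pooled baseline: each engine's recall is measured relative to the union of relevant documents found by any of the engines. A hypothetical sketch of that relative-recall calculation (the engine result sets are invented, and this is an illustration of the pooling idea rather than the authors' exact procedure):

```python
def relative_recall(relevant_by_engine):
    """relevant_by_engine maps engine name -> set of doc ids judged
    relevant in that engine's results. Recall is computed against the
    pooled union of all engines' relevant documents."""
    pool = set().union(*relevant_by_engine.values())
    return {name: len(rel) / len(pool)
            for name, rel in relevant_by_engine.items()}

results = {
    "AltaVista": {1, 2, 3, 4},
    "Excite": {3, 4, 5},
    "Lycos": {1, 5, 6},
}
print(relative_recall(results))  # pool of 6 docs -> 4/6, 3/6, 3/6
```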
  13. Frei, H.P.; Meienberg, S.; Schäuble, P.: ¬The perils of interpreting recall and precision values (1991) 0.01
    0.012845224 = product of:
      0.07707134 = sum of:
        0.07707134 = weight(_text_:wide in 786) [ClassicSimilarity], result of:
          0.07707134 = score(doc=786,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 786, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=786)
      0.16666667 = coord(1/6)
    
    Abstract
     The traditional recall and precision measure is inappropriate when retrieval algorithms that retrieve information from Wide Area Networks are evaluated. The principal reason is that information available in WANs is dynamic and its size is orders of magnitude greater than the size of the usual test collections. To overcome these problems, a new effectiveness measure has been developed, which we call the 'usefulness measure'
  14. Harman, D.: ¬The Text REtrieval Conferences (TRECs) : providing a test-bed for information retrieval systems (1998) 0.01
    0.0112395715 = product of:
      0.067437425 = sum of:
        0.067437425 = weight(_text_:wide in 1314) [ClassicSimilarity], result of:
          0.067437425 = score(doc=1314,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 1314, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1314)
      0.16666667 = coord(1/6)
    
    Abstract
    The Text REtrieval Conference (TREC) workshop series encourages research in information retrieval from large text applications by providing a large test collection, uniform scoring procedures and a forum for organizations interested in comparing their results. Now in its seventh year, the conference has become the major experimental effort in the field. Participants in the TREC conferences have examined a wide variety of retrieval techniques, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback and advanced pattern matching. The TREC conference series is co-sponsored by the National Institute of Standards and Technology (NIST) and the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA)
  15. Yiqun, W.; Zhonghui, Z.; Li, Z.: Experimental study of machine factors and cognitive abilities of users (1998) 0.01
    0.010994123 = product of:
      0.065964736 = sum of:
        0.065964736 = product of:
          0.13192947 = sum of:
            0.13192947 = weight(_text_:programs in 5225) [ClassicSimilarity], result of:
              0.13192947 = score(doc=5225,freq=2.0), product of:
                0.25748047 = queryWeight, product of:
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.044416238 = queryNorm
                0.5123863 = fieldWeight in 5225, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.79699 = idf(docFreq=364, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5225)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
     Describes 3 experiments undertaken to investigate how machine factors influence the cognitive abilities of users. The experiments dealt with programs for presenting retrieval results and user selectivity; thesaurus construction and the accuracy of the subject words ascertained by the user; and menu layout and the time taken by the user. Discusses the findings and the satisfaction of users with present databases
  16. Dunlop, M.D.; Johnson, C.W.; Reid, J.: Exploring the layers of information retrieval evaluation (1998) 0.01
    0.010813512 = product of:
      0.06488107 = sum of:
        0.06488107 = weight(_text_:computer in 3762) [ClassicSimilarity], result of:
          0.06488107 = score(doc=3762,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.39971197 = fieldWeight in 3762, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3762)
      0.16666667 = coord(1/6)
    
    Abstract
    Presents current work on modelling interactive information retrieval systems and users' interactions with them. Analyzes the papers in this special issue in the context of evaluation in information retrieval (IR) by examining the different layers at which IR use could be evaluated. IR poses the double evaluation problem of evaluating both the underlying system effectiveness and the overall ability of the system to aid users. The papers look at different issues in combining human-computer interaction (HCI) research with IR research and provide insights into the problem of evaluating the information seeking process
    Footnote
    Contribution to a special section of articles related to human-computer interaction and information retrieval
  17. Tonta, Y.: Analysis of search failures in document retrieval systems : a review (1992) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 4611) [ClassicSimilarity], result of:
          0.05243182 = score(doc=4611,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 4611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=4611)
      0.16666667 = coord(1/6)
    
    Source
    Public-access computer systems review. 3(1992) no.1, S.4-53
  18. Gilchrist, A.: Research and consultancy (1998) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 1394) [ClassicSimilarity], result of:
          0.05243182 = score(doc=1394,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 1394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=1394)
      0.16666667 = coord(1/6)
    
    Abstract
     State of the art review of the literature published on research and consultancy in library and information science (LIS). Issues covered include: the scope and definitions of what constitutes research and consultancy; funding of research and development; national LIS research and the funding agencies; electronic libraries; document delivery; multimedia document delivery; the Z39.50 standard for client-server computer architecture; the Internet and WWW; electronic publishing; information retrieval; evaluation and evaluation techniques; the Text REtrieval Conferences (TREC); the user domain; management issues; decision support systems; information politics and organizational culture; and value for money issues
  19. Keen, E.M.: Aspects of computer-based indexing languages (1991) 0.01
    0.008738637 = product of:
      0.05243182 = sum of:
        0.05243182 = weight(_text_:computer in 5072) [ClassicSimilarity], result of:
          0.05243182 = score(doc=5072,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.32301605 = fieldWeight in 5072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=5072)
      0.16666667 = coord(1/6)
    
  20. Harter, S.P.: Variations in relevance assessments and the measurement of retrieval effectiveness (1996) 0.01
    0.008028265 = product of:
      0.04816959 = sum of:
        0.04816959 = weight(_text_:wide in 3004) [ClassicSimilarity], result of:
          0.04816959 = score(doc=3004,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 3004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3004)
      0.16666667 = coord(1/6)
    
    Abstract
     The purpose of this article is to bring attention to the problem of variations in relevance assessments and the effects that these may have on measures of retrieval effectiveness. Through an analytical review of the literature, I show that despite known wide variations in relevance assessments in experimental test collections, their effects on the measurement of retrieval performance are almost completely unstudied. I will further argue that what we know about the many variables that have been found to affect relevance assessments under experimental conditions, as well as our new understanding of psychological, situational, user-based relevance, points to a single conclusion. We can no longer rest the evaluation of information retrieval systems on the assumption that such variations do not significantly affect the measurement of information retrieval performance. A series of thorough, rigorous, and extensive tests is needed, of precisely how, and under what conditions, variations in relevance assessments do, and do not, affect measures of retrieval performance. We need to develop approaches to evaluation that are sensitive to these variations and to human factors and individual differences more generally. Our approaches to evaluation must reflect the real world of real users

Languages

  • e 33
  • chi 2
  • d 1
  • f 1
  • ja 1