Search (80 results, page 1 of 4)

  • × theme_ss:"Retrievalstudien"
  • × year_i:[1990 TO 2000}
  1. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.06
    
    Abstract
     TREC is an annual conference held in the USA devoted to electronic systems for searching large full-text collections. The conference deals with evaluation and comparison techniques developed since 1992 by participants from research and industry. The work of the conference is aimed at designers (rather than users) of systems that access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  2. Ellis, D.: Progress and problems in information retrieval (1996) 0.06
    
    Abstract
     An introduction to the principal generic approaches to information retrieval research with their associated concepts, models and systems, this text is designed to keep the information professional up to date with the major themes and developments that have preoccupied researchers in recent months in relation to textual and documentary retrieval systems.
    Date
    26. 7.2002 20:22:46
  3. Sanderson, M.: The Reuters test collection (1996) 0.05
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.04
    
    Abstract
     The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that are compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question.
  5. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.04
    
    Abstract
     The test of retrieval effectiveness performed on IBM's STAIRS system and reported in 'Communications of the ACM' 10 years ago continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
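
     The method of recall calculation discussed in the Blair abstract above concerns the basic difficulty that recall cannot be computed directly when a collection is too large to judge exhaustively. The sketch below shows one common sampling-based workaround (judge a random sample of the unretrieved documents and extrapolate); it is an illustration under that assumption only, not the procedure used in the STAIRS study itself, and the function name, parameters and figures are invented.

import random

def estimate_recall(relevant_retrieved, unretrieved_ids, judge, sample_size=500):
    # relevant_retrieved: number of retrieved documents judged relevant
    # unretrieved_ids:    list of ids of documents the search did not retrieve
    # judge:              callable returning True if the given document is relevant
    if not unretrieved_ids:
        return 1.0 if relevant_retrieved else 0.0
    sample = random.sample(unretrieved_ids, min(sample_size, len(unretrieved_ids)))
    hits = sum(1 for doc_id in sample if judge(doc_id))
    # Extrapolate the sample's relevance rate to the whole unretrieved set.
    estimated_missed = hits / len(sample) * len(unretrieved_ids)
    denominator = relevant_retrieved + estimated_missed
    return relevant_retrieved / denominator if denominator else 0.0
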
  6. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.04
    
    Abstract
     The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of 2 similarly performing systems is superior. For both single-term and multiple-term query retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance-feedback-based retrieval and filtering. Simulations illustrate how the single-term model performs, and sample performance predictions are given for single-term and multiple-term problems.
    Date
    22. 2.1996 13:14:10
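
     As a point of reference for the 'average search length' mentioned in the Losee abstract above, the sketch below computes that measure empirically from a ranked list with known relevance judgements, treating it as the mean rank at which relevant documents occur. It illustrates the measure only, not Losee's analytic model, and the example ranking is invented.

def average_search_length(ranked_relevance):
    # ranked_relevance: booleans, True where the document at that rank is relevant
    positions = [rank for rank, rel in enumerate(ranked_relevance, start=1) if rel]
    return sum(positions) / len(positions) if positions else float("inf")

# Relevant documents at ranks 1, 3 and 7 -> (1 + 3 + 7) / 3 = 3.67
print(average_search_length([True, False, True, False, False, False, True]))
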
  7. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.04
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  8. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.04
    
    Abstract
     Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, why, and with what effect, but also what they would like to do, how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  9. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    
    Source
    Online. 22(1998) no.6, S.57-58
  10. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    
    Date
    22. 7.2006 18:43:54
  11. Harter, S.P.; Hert, C.A.: Evaluation of information retrieval systems : approaches, issues, and methods (1997) 0.02
    
    Abstract
     State-of-the-art review of information retrieval systems, defined as systems retrieving documents as opposed to numerical data. Explains the classic Cranfield studies that have served as a standard for retrieval testing since the 1960s and discusses the Cranfield model and its relevance-based measures of retrieval effectiveness. Details some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness and basic concepts. Discusses the evaluation of Internet search engines in light of the Cranfield model, noting the very real differences between batch systems (Cranfield) and interactive systems (Internet). Because the Internet collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Considers future directions in evaluating information retrieval systems.
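
     The relevance-based Cranfield measures referred to above are simple set ratios. The minimal sketch below (document identifiers invented) also makes the abstract's point about the Internet concrete: recall needs the complete set of relevant documents in the collection, which a fixed test collection supplies and an open, changing collection does not.

def precision_recall(retrieved, relevant):
    # retrieved, relevant: sets of document ids; 'relevant' must cover the whole collection
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of 4 retrieved documents are relevant and 2 relevant documents were missed -> (0.75, 0.6)
print(precision_recall({"d1", "d2", "d3", "d4"}, {"d1", "d2", "d3", "d5", "d6"}))
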
  12. Evaluation of information retrieval systems : special topic issue (1996) 0.02
    
    Abstract
    A special issue devoted to the topic of evaluation of information retrieval systems
  13. Tonta, Y.: Analysis of search failures in document retrieval systems : a review (1992) 0.02
    
    Abstract
     This paper examines search failures in document retrieval systems. Since search failures are closely related to overall document retrieval system performance, the paper briefly discusses retrieval effectiveness measures such as precision and recall. It examines 4 methods used to study retrieval failures: retrieval effectiveness measures, user satisfaction measures, transaction log analysis, and the critical incident technique. It summarizes the findings of major failure analysis studies and identifies the types of failures that usually occur in document retrieval systems.
    Source
    Public-access computer systems review. 3(1992) no.1, S.4-53
  14. Feldman, S.: Testing natural language : comparing DIALOG, TARGET, and DR-LINK (1996) 0.02
    
    Abstract
     Compares online searching of DIALOG (a traditional Boolean system), TARGET (a relevance ranking system) and DR-LINK (an advanced intelligent text processing system), in order to establish the differing strengths of traditional and natural language processing search systems. Details the example search queries used in the comparison and how each of the systems performed. Considers the implications of the findings for professional information searchers and end users. Natural language processing systems are useful because they develop a wider understanding of queries than use of traditional systems may provide.
  15. Tague-Sutcliffe, J.M.: Some perspectives on the evaluation of information retrieval systems (1996) 0.02
    
    Abstract
     As an introduction to the papers in this special issue, some of the major problems facing investigators evaluating information retrieval systems are presented. These problems include the question of the necessity of using real users, as opposed to subject experts, in making relevance judgements; the possibility of evaluating individual components of the retrieval process, rather than the process as a whole; the kinds of aggregation that are appropriate for the measures used in evaluating systems; the value of an analytic or simulatory, as opposed to an experimental, approach to evaluating retrieval systems; the difficulties in evaluating interactive systems; and the kinds of generalization that are possible from information retrieval tests.
  16. Hersh, W.R.; Hickam, D.H.: ¬An evaluation of interactive Boolean and natural language searching with an online medical textbook (1995) 0.02
    
    Abstract
     Few studies have compared the interactive use of Boolean and natural language search systems. Studies the use of 3 retrieval systems by senior medical students searching on queries generated by actual physicians in a clinical setting. The searchers were randomized to search on 2 or 3 different retrieval systems: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall or precision among the 3 systems. Likewise, there was no user preference for any system over the others. The study revealed problems with traditional measures of retrieval evaluation when applied to the interactive search setting.
  17. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    
    Date
    27. 2.1999 20:55:22
  18. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.02
    
    Date
    27. 2.1999 20:59:22
  19. Allen, B.: Logical reasoning and retrieval performance (1993) 0.02
    
    Abstract
    Tests the logical reasoning ability of end users of a CD-ROM index and assesses associations between different levels of this ability and aspects of retrieval performance. Users' selection of vocabulary and their selection of citations for further examination are both influenced by this ability. The designs of information systems should address the effects of logical reasoning on search behaviour. People with lower levels of logical reasoning ability may experience difficulty using systems in which user selectivity plays an important role. Other systems, such as those with ranked output, may decrease the need for users to make selections and would be easier to use for people with lower levels of logical reasoning ability
  20. Barker, A.L.: Non-Boolean searching on commercial online systems : optimising use of Dialog TARGET and ESA/IRS QUESTQUORUM (1995) 0.02
    
    Abstract
     Considers 2 non-Boolean searching systems available on commercial online systems. QUESTQUORUM, based on coordination-level searching, was introduced by ESA/IRS in Dec. 85. TARGET, which employs partial-match probabilistic retrieval, was introduced by DIALOG in Dec. 93. 6 subject searches were carried out on databases available on both Dialog and ESA/IRS to compare TARGET and QUESTQUORUM with Boolean searching. Outlines the main advantages and disadvantages of these tools. Suggests when their use may be preferable.
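
     Coordination-level searching, the principle behind QUESTQUORUM in the abstract above, orders documents by how many of the query terms they contain instead of requiring every term as a Boolean AND does. The sketch below illustrates the idea on invented data; it is not the ESA/IRS or DIALOG implementation.

def coordination_rank(query_terms, documents):
    # Rank document ids by the number of distinct query terms their text contains.
    query = set(term.lower() for term in query_terms)
    scored = [(len(query & set(text.lower().split())), doc_id)
              for doc_id, text in documents.items()]
    return sorted(scored, reverse=True)

docs = {
    "d1": "online retrieval systems evaluation",
    "d2": "boolean searching of online systems",
    "d3": "probabilistic retrieval models",
}
# Documents matching more query terms rank first: d1 (3), d2 (2), d3 (1).
print(coordination_rank(["online", "retrieval", "systems"], docs))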

Languages

  • e 76
  • chi 2
  • d 1
  • f 1

Types

  • a 74
  • m 3
  • s 3
  • el 1