Search (202 results, page 2 of 11)

  • theme_ss:"Retrievalstudien"
  • year_i:[1990 TO 2000}
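A note on the second filter: the mixed brackets in year_i:[1990 TO 2000} are not a typo but Solr range syntax, where [ marks an inclusive and } an exclusive bound, so the filter covers publication years 1990-1999. As a minimal sketch of how a page like this could be requested from a standard Solr select handler (the endpoint URL and core name are assumptions, not taken from this page):

```python
import requests

SOLR_URL = "http://localhost:8983/solr/catalog/select"  # assumed endpoint

params = {
    "q": "*:*",
    # The two active facet filters shown above; [1990 TO 2000} is Solr's
    # half-open range: 1990 inclusive, 2000 exclusive.
    "fq": ['theme_ss:"Retrievalstudien"', "year_i:[1990 TO 2000}"],
    "rows": 20,    # 20 hits per page
    "start": 20,   # offset for page 2 of the 202 results
}

response = requests.get(SOLR_URL, params=params)
print(response.json()["response"]["numFound"])  # expect 202
```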
  1. Barker, A.L.: Non-Boolean searching on commercial online systems : optimising use of Dialog TARGET and ESA/IRS QUESTQUORUM (1995) 0.02
    Abstract
    Considers 2 non-Boolean searching systems available on commercial online systems. QUESTQUORUM, based on coordination level searching, was introduced by ESA/IRS in Dec. 1985. TARGET, which employs partial match probabilistic retrieval, was introduced by DIALOG in Dec. 1993. 6 subject searches were carried out on databases available on both DIALOG and ESA/IRS to compare TARGET and QUESTQUORUM with Boolean searching. Outlines the main advantages and disadvantages of these tools, and suggests when their use may be preferable.
    Source
    Online information 95: Proceedings of the 19th International online information meeting, London, 5-7 December 1995. Ed.: D.I. Raitt u. B. Jeapes
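The score shown after each title comes from Lucene's ClassicSimilarity, a tf-idf model: each matching term contributes queryWeight × fieldWeight, and coordination factors down-weight queries whose clauses only partly match. As a worked example, the following sketch reproduces hit 1's score from the values the engine reported (term frequencies, inverse document frequencies, and normalization factors); it is illustrative, not the engine's actual code path:

```python
import math

# Values reported by the engine's explain output for hit 1 (Lucene doc 3853).
query_norm = 0.051022716   # query normalization factor
field_norm = 0.0546875     # length norm of the matched field
coord_inner = 2 / 3        # 2 of 3 query terms matched the field
coord_outer = 1 / 3        # 1 of 3 top-level query clauses matched

def term_score(freq: float, idf: float) -> float:
    """One term's contribution: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

score = coord_outer * coord_inner * (
    term_score(freq=8.0, idf=3.0349014)    # _text_:online    -> 0.07269186
    + term_score(freq=2.0, idf=3.024915)   # _text_:retrieval -> 0.03610713
)
print(f"{score:.9f}")  # ~0.024177555, shown rounded as 0.02 in the listing
```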
  2. Tibbo, H.R.: The epic struggle : subject retrieval from large bibliographic databases (1994) 0.02
    Abstract
    Discusses a retrieval study that focused on collection-level archival records in the OCLC OLUC, made accessible through the EPIC online search system. Data were also collected from the local OPAC at the University of North Carolina at Chapel Hill (UNC-CH), into which UNC-CH-produced OCLC records are loaded. The chief objective was to explore the retrieval environments in which a random sample of USMARC AMC records produced at UNC-CH were found: specifically, to obtain a picture of the density of these databases in regard to each subject heading applied and, more generally, for each record. Key questions were: how many records would be retrieved for each subject heading attached to each of the records, and what was the nature of these subject headings vis-à-vis the number of hits associated with them. Results show that large retrieval sets are a potential problem with national bibliographic utilities and that local and national retrieval environments can vary greatly. The need for specificity in indexing is emphasized.
  3. Borgman, C.L.: Why are online catalogs still hard to use? (1996) 0.02
    Abstract
    We return to arguments made 10 years ago that online catalogs are difficult to use because their design does not incorporate sufficient understanding of searching behavior. The earlier article examined studies of information retrieval system searching for their implications for online catalog design; this article examines the implications of card catalog design for online catalogs. With this analysis, we hope to contribute to a better understanding of user behavior and to lay to rest the card catalog design model for online catalogs. We discuss the problems with query matching systems, which were designed for skilled search intermediaries rather than end-users, and the knowledge and skills they require in the information-seeking process, illustrated with examples of searching card and online catalogs. Searching requires conceptual knowledge of the information retrieval process - translating an information need into a searchable query; semantic knowledge of how to implement a query in a given system - the how and when to use system features; and technical skills in executing the query - basic computing skills and the syntax of entering queries as specific search statements. In the short term, we can help make online catalogs easier to use through improved training and documentation that is based on information-seeking behavior, with the caveat that good training is not a substitute for good system design. Our long-term goal should be to design intuitive systems that require a minimum of instruction. Given the complexity of the information retrieval problem and the limited capabilities of today's systems, we are far from achieving that goal. If libraries are to provide primary information services for the networked world, they need to put research results on the information-seeking process into practice in designing the next generation of online public access information retrieval systems.
  4. Bates, M.J.: Document familiarity, relevance, and Bradford's law : the Getty Online Searching Project report; no.5 (1996) 0.02
    Abstract
    The Getty Online Searching Project studied the end-user searching behaviour of 27 humanities scholars over a 2-year period. A number of scholars anticipated that they would already be familiar with a percentage of the records their searches retrieved. High document familiarity can be a significant factor in searching. Draws implications regarding the impact of high document familiarity on relevance and information retrieval theory. Makes speculations regarding high document familiarity and Bradford's law.
  5. Keen, E.M.: Some aspects of proximity searching in text retrieval systems (1992) 0.02
    Abstract
    Describes and evaluates the proximity search facilities in external online systems and in-house retrieval software. Discusses and illustrates capabilities, syntax and circumstances of use. Presents measurements of the overheads required by proximity for storage, record input time and search time. The strategy-narrowing effect of proximity is illustrated by recall and precision test results. Usage and problems lead to a number of design ideas for better implementation: some based on existing Boolean strategies, one on the use of weighted proximity to automatically produce ranked output. A comparison of Boolean, quorum and proximate term-pair distance strategies is included.
  6. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.02
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that are compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question.
  7. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.02
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS and reported in 'Communications of the ACM' 10 years ago continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  8. Harman, D.: Overview of the first Text Retrieval Conference (1993) 0.02
    Abstract
    The first Text Retrieval Conference (TREC-1) was held in early November and was attended by about 100 people working in the 25 participating groups. The goal of the conference was to bring research groups together to discuss their work on a new large test collection. A large variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and discuss how differences among the systems affected performance.
    Source
    Proceedings of the 14th National Online Meeting 1993, New York, 4-6 May 1993. Ed.: M.E. Williams
  9. Hersh, W.R.; Hickam, D.H.: An evaluation of interactive Boolean and natural language searching with an online medical textbook (1995) 0.02
    Abstract
    Few studies have compared the interactive use of Boolean and natural language search systems. Studies the use of 3 retrieval systems by senior medical students searching on queries generated by actual physicians in a clinical setting. The searchers were randomized to search on 2 or 3 different retrieval systems: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall or precision among the 3 systems. Likewise, there was no user preference for any system over the others. The study revealed problems with traditional measures of retrieval evaluation when applied to the interactive search setting.
  10. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.02
    Source
    Online. 22(1998) no.3, S.24-26,28
  11. Lespinasse, K.: TREC: une conférence pour l'évaluation des systèmes de recherche d'information (1997) 0.02
    Date
    1. 8.1996 22:01:00
    Footnote
    Translated title: TREC: the Text REtrieval Conference
  12. Hallet, K.S.: Separate but equal? : A system comparison study of MEDLINE's controlled vocabulary MeSH (1998) 0.02
    Abstract
    Reports results of a study to test the effect of controlled vocabulary search feature implementation on 2 online systems. Specifically, the study examined retrieval rates using 4 unique controlled vocabulary search features (Explode, major descriptor, descriptor, subheadings). 2 questions were addressed: what, if any, are the general differences between the controlled vocabulary implementations in DIALOG and Ovid; and what, if any, are the impacts of each of the differing controlled vocabulary search features upon retrieval rates? Each search feature was applied to 9 search queries obtained from a medical reference librarian. The same queries were searched in the complete MEDLINE file on the DIALOG and Ovid online host systems. The unique records (those retrieved in only 1 of the 2 systems) were identified and analyzed. DIALOG produced equal or more records than Ovid in nearly 20% of the queries. Concludes that users need to be aware of system-specific designs that may require differing input strategies across different systems for the same unique controlled vocabulary search features. Makes recommendations and suggestions for future research.
  13. Meadows, C.J.: A study of user performance and attitudes with information retrieval interfaces (1995) 0.02
    Abstract
    Reports on a project undertaken to compare the behaviour of 2 types of users with 2 types of information retrieval interfaces. The user types were search process specialists and subject matter domain specialists with no prior online database search experience. The interfaces were native DIALOG, which uses a procedural language, and OAK, a largely menu-based, hence non-procedural, interface communicating with DIALOG. 3 types of data were recorded: logs automatically recorded by computer monitoring of all searches, results of structured interviews with subjects at the time of the searches, and results of focus group discussions after all project tasks were completed. The type of user was determined by a combination of prior training, objective in searching, and subject domain knowledge. The results show that the type of interface does affect performance and that users adapt their behaviour to interfaces differently. Different combinations of search experience and domain knowledge will lead to different behaviour in the use of an information retrieval system. Different kinds of users can best be served with different kinds of interfaces.
  14. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  15. Lepsky, K.; Siepmann, J.; Zimmermann, A.: Automatische Indexierung für Online-Kataloge : Ergebnisse eines Retrievaltests (1996) 0.02
    Abstract
    Examines the effectiveness of automated indexing and presents the results of a study of information retrieval from a segment (40,000 items) of the ULB Düsseldorf database. The segment was selected randomly and all the documents included were indexed automatically. The search topics included 50 subject areas ranging from economic growth to alternative energy sources. There were 876 relevant documents in the database segment across the 50 search topics; the number per topic ranged from 1 to 244 references, with the average being 17.52 documents per topic. It therefore seems that, in the immediate future, automatic indexing should be used in combination with intellectual indexing.
  16. Dalrymple, P.W.; Cox, R.: An examination of the effects of non-Boolean enhancements to an information retrieval system (1992) 0.02
    Abstract
    One of the problems in information retrieval (IR) research is that few of the non-Boolean features of experimental systems developed by IR researchers have been adopted in commercially available systems which can be evaluated using real users with actual information needs. Without the opportunity to examine how these features perform with actual bibliographic files and how they affect users in their information-seeking tasks, our understanding of information retrieval remains limited, and system development fails to advance. The research described here compared two CD-ROM MEDLINE systems for the Macintosh, one of which incorporates many of the features previously identified by research as central to sound and innovative IR design, such as elimination of the need for Boolean logical connectors, acceptance of natural language queries, and ranked output. The other is more traditional in its design. Two groups of search topics selected from the National Library of Medicine's test queries in clinical medicine were searched using both a natural language strategy and a strategy based on MeSH vocabulary. Results were compared on the following variables: search input and processing times, set size, overlap between sets produced by the two systems, and evaluative judgements made by subject experts. The findings indicate that these systems differ on these dimensions, and that greater variance occurs in the natural language searches.
    Source
    Proceedings of the 13th National Online Meeting. Ed.: M.E. Williams
  17. Bollmann-Sdorra, P.: Probleme der Validität bei Retrievaltests (1990) 0.02
    Abstract
    This paper uses examples to discuss problems of validity in retrieval tests. External validity is discussed in the context of similarity measures and evaluation measures; internal validity is discussed using averaging as an example. It emerges that the requirement of validity restricts the methods available for selection.
  18. Sachse, E.; Liebig, M.; Gödert, W.: Automatische Indexierung unter Einbeziehung semantischer Relationen : Ergebnisse des Retrievaltests zum MILOS II-Projekt (1998) 0.01
    Abstract
    Within MILOS II, the first MILOS project on the automatic indexing of title data was extended by a semantic component that incorporated thesaurus relations from the Schlagwortnormdatei (the German subject headings authority file). The retrieval test carried out for the concluding evaluation, and its results, are the focus of this text. In addition, an overview of retrieval tests already conducted (predominantly from the Anglo-American sphere) is given, and the fundamental questions to be considered in the practical conduct of a retrieval test are explained.
  19. Tonta, Y.: Analysis of search failures in document retrieval systems : a review (1992) 0.01
    Abstract
    This paper examines search failures in document retrieval systems. Since search failures are closely related to overall document retrieval system performance, the paper briefly discusses retrieval effectiveness measures such as precision and recall. It examines 4 methods used to study retrieval failures: retrieval effectiveness measures, user satisfaction measures, transaction log analysis, and the critical incident technique. It summarizes the findings of major failure analysis studies and identifies the types of failures that usually occur in document retrieval systems.
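Since several of the studies above report precision and recall, a minimal reference sketch of the two measures over a retrieved set and a set of relevance judgements (the document IDs are made up for illustration):

```python
def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """Precision = |retrieved ∩ relevant| / |retrieved|;
    recall    = |retrieved ∩ relevant| / |relevant|."""
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Illustrative IDs only, not taken from any study in this list.
p, r = precision_recall({"d1", "d2", "d3", "d4"}, {"d2", "d4", "d7", "d9", "d11"})
print(p, r)  # 0.5 0.4
```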
  20. Knorz, G.: Testverfahren für intelligente Indexierungs- und Retrievalsysteme anhand deutsch-sprachiger sozialwissenschaftlicher Fachinformation (GIRT) : Bericht über einen Workshop am 12. September 1997 im IZ Sozialwissenschaften, Bonn (1998) 0.01

Types

  • a 187
  • s 7
  • m 3
  • r 3
  • el 2
  • x 1