Search (353 results, page 1 of 18)

  • × theme_ss:"Retrievalstudien"
  • × type_ss:"a"
  1. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.04
    Date
    22. 7.2006 18:43:54
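  Each hit carries a Lucene ClassicSimilarity relevance score (0.04 for the entry above). The interface's "explain" output decomposes such a score into per-term tf-idf weights, a queryNorm, a fieldNorm, and coord factors for partially matched queries. Below is a minimal Python sketch of that decomposition, reproducing the top hit's score from the constants in its explain output; the helper and variable names are ours, only the numbers come from the interface:

      import math

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity: queryWeight (idf * queryNorm) times
          # fieldWeight (tf * idf * fieldNorm), i.e. tf * idf^2 * norms
          tf = math.sqrt(freq)                             # 1.4142135 for freq=2
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.3602545 for "in"
          return (idf * query_norm) * (tf * idf * field_norm)

      # Constants from the explain output of hit 1 (doc 5089)
      query_norm, field_norm = 0.051176514, 0.109375
      w_in = term_weight(2.0, 30841, 44218, query_norm, field_norm)
      w_22 = term_weight(2.0, 3622, 44218, query_norm, field_norm) * 0.5  # coord(1/2)
      print((w_in + w_22) * 2 / 3)  # coord(2/3) -> ~0.0421, displayed as 0.04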
  2. Lazonder, A.W.; Biemans, H.J.A.; Wopereis, I.G.J.H.: Differences between novice and experienced users in searching information on the World Wide Web (2000) 0.03
    Abstract
    Searching for information on the WWW basically comes down to locating an appropriate Web site and retrieving relevant information from that site. This study examined the effect of a user's WWW experience on both phases of the search process. 35 students from 2 schools for Dutch pre-university education were observed while performing 3 search tasks. The results indicate that subjects with WWW experience are more proficient in locating Web sites than are novice WWW users. The observed differences were ascribed to the experts' superior skills in operating Web search engines. However, on tasks that required subjects to locate information on specific Web sites, the performance of experienced and novice users was equivalent - a result that is in line with hypertext research. Based on these findings, implications for training and supporting students in searching for information on the WWW are identified. Finally, the role of the subjects' level of domain expertise is discussed and directions for future research are proposed.
  3. Hider, P.: ¬The search value added by professional indexing to a bibliographic database (2017) 0.03
    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This paper reports on an analysis of the loss levels that would result if a bibliographic database, namely the Australian Education Index (AEI), were missing the subject descriptors and identifiers assigned by its professional indexers, employing the methodology developed by Gross and Taylor (2005), and later by Gross et al. (2015). The results indicate that AEI users would lose a similar proportion of hits per query to that experienced by library catalog users: on average, 27% of the resources found by a sample of keyword queries on the AEI database would not have been found without the subject indexing, based on the Australian Thesaurus of Education Descriptors (ATED). The paper also discusses the methodological limitations of these studies, pointing out that real-life users might still find some of the resources missed by a particular query through follow-up searches, while additional resources might also be found through iterative searching on the subject vocabulary. The paper goes on to describe a new research design, based on a before-and-after experiment, which addresses some of these limitations. It is argued that this alternative design will provide a more realistic picture of the value that professionally assigned subject indexing and controlled subject vocabularies can add to literature searching of a more scholarly and thorough kind.
    Content
    Contribution to: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
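  The loss measure used in both Hider studies (this entry and entry 8 below) reduces to simple set arithmetic: run each query against the database with and without its subject indexing and count the hits that disappear. A sketch of that calculation as we read the methodology; the record IDs are hypothetical:

      def share_of_hits_lost(hits_with_subjects, hits_without_subjects):
          # Fraction of a query's hits that vanish once the controlled
          # subject descriptors and identifiers are stripped out
          full = set(hits_with_subjects)
          return len(full - set(hits_without_subjects)) / len(full)

      print(share_of_hits_lost({"r1", "r2", "r3", "r4"},
                               {"r1", "r2", "r3"}))  # 0.25 -> "about a quarter"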
  4. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.03
    Pages
    pp.22-25
    Source
    Productivity in the information age : proceedings of the 46th ASIS annual meeting, 1983. Ed.: Raymond F Vondra
  5. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.03
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  6. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.03
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS system, reported in 'Communications of the ACM' ten years ago, continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, pp.4-22
  7. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.03
    Abstract
    Research shows that 65-80% of subject search terms fail to match the appropriate subject heading and that one third to one half of subject searches result in no references being retrieved. Examines the subject search terms generated by 82 school and college students in Princeton, NJ, evaluates the match between the named terms and the expected subject headings, and proposes an explanation for match failures in relation to 3 invariant properties common to all search terms: concreteness, complexity, and syndeticity. Suggests that match failure is a consequence of developmental naming patterns and that these patterns can be overcome through the use of metacognitive naming skills.
    Date
    2.11.1996 13:08:22
  8. Hider, P.: ¬The search value added by professional indexing to a bibliographic database (2018) 0.03
    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This article reports on an investigation of the search value that subject descriptors and identifiers assigned by professional indexers add to a bibliographic database, namely the Australian Education Index (AEI). First, a similar methodology to that developed by Gross et al. (2015) was applied, with keyword searches representing a range of educational topics run on the AEI database with and without its subject indexing. The results indicated that AEI users would also lose, on average, about a quarter of hits per query. Second, an alternative research design was applied in which an experienced literature searcher was asked to find resources on a set of educational topics on an AEI database stripped of its subject indexing and then asked to search for additional resources on the same topics after the subject indexing had been reinserted. In this study, the proportion of additional resources that would have been lost had it not been for the subject indexing was again found to be about a quarter of the total resources found for each topic, on average.
  9. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.02
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual enquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the average percentage noise figure was approximately 33 per cent, based on an average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
    Source
    Journal of documentation. 22(1966), pp.203-209
  10. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.02
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
  11. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.02
    Date
    19. 3.1996 11:22:12
  12. Sanderson, M.: ¬The Reuters test collection (1996) 0.02
    Abstract
    Describes the Reuters test collection, which at 22,173 references is significantly larger than most traditional test collections. In addition, Reuters has none of the recall calculation problems normally associated with some of the larger test collections available. Explains the method derived by D.D. Lewis to perform retrieval experiments on the Reuters collection and illustrates the use of the Reuters collection using some simple retrieval experiments that compare the performance of stemming algorithms.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  13. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.02
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from the research and industrial fields. The work of the conference is aimed at designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  14. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.02
    Abstract
    The German search engines Abacho, Acoon, Fireball and Lycos, as well as the Web directories Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, a recall of around 22%, a precision of just under 19% and an availability of 24% are achieved.
    Footnote
    See also the report in: nfd 53(2002) no.2, p.71
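  The three figures reported above are standard retrieval-test ratios; because absolute recall cannot be known on the open Web, recall is typically measured relative to the pool of relevant documents found by all engines under test. A sketch of the three measures under that pooling assumption; the function names and the tiny example are ours:

      def precision_at_cutoff(ranked_hits, relevant, cutoff=25):
          # Share of the first `cutoff` results judged relevant
          top = ranked_hits[:cutoff]
          return sum(1 for hit in top if hit in relevant) / len(top)

      def relative_recall(engine_relevant, pooled_relevant):
          # One engine's relevant hits over the pool found by all engines
          return len(set(engine_relevant) & set(pooled_relevant)) / len(set(pooled_relevant))

      def availability(urls_checked, urls_reachable):
          # Share of listed URLs that actually resolved at test time
          return len(urls_reachable) / len(urls_checked)

      print(precision_at_cutoff(["u1", "u2", "u3", "u4"], {"u1", "u3"}, cutoff=4))  # 0.5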
  15. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.02
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study where 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I search terms were compared character by character. At level II different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions.
  16. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.02
    Abstract
    This article describes an eye-tracking experiment that investigated when, and on the basis of which information, relevance decisions are made during topical document assessment, and which factors influence the relevance decision. After a brief introduction, relevant studies are reviewed in which eye tracking was used as a method for examining interaction behaviour with result lists (information seeking behavior). User behaviour is influenced above all by different task types, by the information displayed, and by the ranking of a result. Eye-tracking studies also make it possible to sort users into different classes of assessment and reading types. This information can be used as implicit feedback to personalize search and to increase the relevance of search results without any active involvement of the user. In an exploratory eye-tracking experiment with 12 students of Hochschule Darmstadt, two typical assessment types are identified on the basis of the length of the overall assessment, the number of fixations, the number of metadata elements visited, and the length of the scan path. In the experiment, the metadata field Abstract is reliably identified as the most important document property for assigning relevance.
    Date
    22. 7.2012 19:25:54
  17. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.02
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Content
    Contribution to a special issue: Festschrift in honour of Nigel Ford.
    Date
    6. 4.2015 19:31:22
  18. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.02
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
    Date
    9. 2.1997 18:44:22
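  The two consistency calculations named in the abstract above are usually given as Hooper's measure, agreements over the union of both indexers' term sets, and Rolling's 1981 measure (spelled "Rollin" in the record), twice the agreements over the total terms assigned. The sketch below is that common reading, not a formula taken from the paper itself, and the term sets are invented:

      def hooper(terms_a, terms_b):
          # Hooper (1965): common terms / (terms A + terms B - common)
          a, b = set(terms_a), set(terms_b)
          return len(a & b) / (len(a) + len(b) - len(a & b))

      def rolling(terms_a, terms_b):
          # Rolling (1981): 2 * common terms / (terms A + terms B)
          a, b = set(terms_a), set(terms_b)
          return 2 * len(a & b) / (len(a) + len(b))

      a = {"memory", "recall", "children", "learning"}
      b = {"memory", "recall", "adolescents"}
      print(hooper(a, b), rolling(a, b))  # 0.4 0.571... -- Rolling is always >=
                                          # Hooper, matching the figures reported above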
  19. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.02
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance. They can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
  20. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.02
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects, including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
    Source
    Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler
