Search (83 results, page 1 of 5)

  • theme_ss:"Retrievalstudien"
  • year_i:[1990 TO 2000}
  1. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.15
    0.14804515 = product of:
      0.22206771 = sum of:
        0.10735885 = weight(_text_:search in 760) [ClassicSimilarity], result of:
          0.10735885 = score(doc=760,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.6144187 = fieldWeight in 760, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
        0.11470887 = product of:
          0.22941774 = sum of:
            0.22941774 = weight(_text_:engines in 760) [ClassicSimilarity], result of:
              0.22941774 = score(doc=760,freq=8.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.8981709 = fieldWeight in 760, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0625 = fieldNorm(doc=760)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
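    The score breakdown shown with each hit follows Lucene's ClassicSimilarity (TF-IDF) formula: per matching term, tf is the square root of the term frequency, the query weight is idf × queryNorm, the field weight is tf × idf × fieldNorm, and the clause contributions are summed and scaled by the coordination factors. A minimal Python sketch that recombines the factors displayed for entry 1 (the function name and dict layout are illustrative, not part of the database):

      import math

      def classic_similarity(terms, query_norm, coord):
          # Recombine the factors shown in a Lucene ClassicSimilarity explanation.
          total = 0.0
          for t in terms:
              tf = math.sqrt(t["freq"])                        # e.g. 2.828427 for freq=8
              query_weight = t["idf"] * query_norm             # e.g. 0.1747324
              field_weight = tf * t["idf"] * t["field_norm"]   # e.g. 0.6144187
              total += query_weight * field_weight * t.get("clause_coord", 1.0)
          return total * coord

      # Factors copied from the explanation for doc 760 above
      score = classic_similarity(
          [
              {"freq": 8.0, "idf": 3.475677, "field_norm": 0.0625},                       # _text_:search
              {"freq": 8.0, "idf": 5.080822, "field_norm": 0.0625, "clause_coord": 0.5},  # _text_:engines, coord(1/2)
          ],
          query_norm=0.05027291,
          coord=2.0 / 3.0,  # coord(2/3): 2 of 3 query clauses matched
      )
      print(f"{score:.4f}")  # ~0.1480, matching the 0.14804515 shown above (up to display rounding)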
  2. Agata, T.: A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.13
    0.13326861 = product of:
      0.1999029 = sum of:
        0.113871254 = weight(_text_:search in 3892) [ClassicSimilarity], result of:
          0.113871254 = score(doc=3892,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.6516894 = fieldWeight in 3892, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.09375 = fieldNorm(doc=3892)
        0.08603165 = product of:
          0.1720633 = sum of:
            0.1720633 = weight(_text_:engines in 3892) [ClassicSimilarity], result of:
              0.1720633 = score(doc=3892,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.67362815 = fieldWeight in 3892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3892)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
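    Entry 2 evaluates Web search engines with Cooper's Expected Search Length (ESL). As a reminder of what the measure counts, here is a minimal, hypothetical sketch for a simple (untied) ranking: the search length is the number of non-relevant documents a user must examine before finding the desired number of relevant ones, and ESL averages that length over queries or orderings. The function name and example ranking are assumptions; the exact ESL variant used in the paper is not shown on this page.

      def search_length(ranked_relevance, wanted_relevant=1):
          # Number of non-relevant documents examined before the wanted
          # number of relevant documents has been found (simple ranking).
          found = 0
          nonrelevant_seen = 0
          for is_relevant in ranked_relevance:
              if is_relevant:
                  found += 1
                  if found == wanted_relevant:
                      return nonrelevant_seen
              else:
                  nonrelevant_seen += 1
          return None  # ranking does not contain enough relevant documents

      # Hypothetical ranking: True = relevant, False = non-relevant
      print(search_length([False, True, False, False, True], wanted_relevant=2))  # -> 3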
  3. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.12
    0.11757234 = product of:
      0.1763585 = sum of:
        0.15592465 = weight(_text_:search in 1757) [ClassicSimilarity], result of:
          0.15592465 = score(doc=1757,freq=30.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.89236253 = fieldWeight in 1757, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1757)
        0.020433856 = product of:
          0.040867712 = sum of:
            0.040867712 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.040867712 = score(doc=1757,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.23214069 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study in which 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions.
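    The four levels describe how strictly two searchers' term selections are compared, from exact character-by-character matching (level I) to a broad conceptual interpretation (level IV). The abstract does not give the consistency formula itself; a common choice is the overlap of the two term sets relative to all distinct terms used, as in this hypothetical sketch of a level-I comparison:

      def pairwise_consistency(terms_a, terms_b):
          # Level-I style comparison: terms must match character by character.
          # Consistency here = shared terms / all distinct terms (one common choice;
          # the study's exact formula is not given in the abstract).
          a, b = set(terms_a), set(terms_b)
          return len(a & b) / len(a | b) if a | b else 1.0

      # Hypothetical term selections by two searchers for the same request
      print(pairwise_consistency(
          ["online catalog", "subject searching", "OPAC"],
          ["opac", "subject searching", "end users"],
      ))  # -> 0.2  ("OPAC" vs "opac" fails an exact character-by-character match)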
  4. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.07
    0.07012871 = product of:
      0.10519306 = sum of:
        0.08135357 = weight(_text_:search in 5598) [ClassicSimilarity], result of:
          0.08135357 = score(doc=5598,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.46558946 = fieldWeight in 5598, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5598)
        0.0238395 = product of:
          0.047679 = sum of:
            0.047679 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
              0.047679 = score(doc=5598,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2708308 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Research shows that 65-80% of subject search terms fail to match the appropriate subject heading and that one third to one half of subject searches result in no references being retrieved. Examines the subject search terms generated by 82 school and college students in Princeton, NJ, evaluates the match between the named terms and the expected subject headings, and proposes an explanation for match failures in relation to 3 invariant properties common to all search terms: concreteness, complexity, and syndeticity. Suggests that match failure is a consequence of developmental naming patterns and that these patterns can be overcome through the use of metacognitive naming skills.
    Date
    2.11.1996 13:08:22
  5. Harter, S.P.; Hert, C.A.: Evaluation of information retrieval systems : approaches, issues, and methods (1997) 0.06
    0.06476976 = product of:
      0.09715463 = sum of:
        0.0469695 = weight(_text_:search in 2264) [ClassicSimilarity], result of:
          0.0469695 = score(doc=2264,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 2264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2264)
        0.05018513 = product of:
          0.10037026 = sum of:
            0.10037026 = weight(_text_:engines in 2264) [ClassicSimilarity], result of:
              0.10037026 = score(doc=2264,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39294976 = fieldWeight in 2264, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2264)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    State-of-the-art review of information retrieval systems, defined as systems retrieving documents as opposed to numerical data. Explains the classic Cranfield studies that have served as a standard for retrieval testing since the 1960s and discusses the Cranfield model and its relevance-based measures of retrieval effectiveness. Details some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness and basic concepts. Discusses the evaluation of Internet search engines in light of the Cranfield model, noting the very real differences between batch systems (Cranfield) and interactive systems (the Internet). Because the Internet collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Considers future directions in evaluating information retrieval systems.
  6. Wu, C.-J.: Experiments on using the Dublin Core to reduce the retrieval error ratio (1998) 0.06
    0.06476976 = product of:
      0.09715463 = sum of:
        0.0469695 = weight(_text_:search in 5201) [ClassicSimilarity], result of:
          0.0469695 = score(doc=5201,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
        0.05018513 = product of:
          0.10037026 = sum of:
            0.10037026 = weight(_text_:engines in 5201) [ClassicSimilarity], result of:
              0.10037026 = score(doc=5201,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39294976 = fieldWeight in 5201, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5201)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In order to test the effect of metadata on information retrieval, an experiment was designed and conducted with a group of 7 graduate students, using the Dublin Core as the cataloguing metadata. Results show that, on average, the retrieval error rate is only 2.9 per cent for the MES system (http://140.136.85.194), which uses the Dublin Core to describe documents on the World Wide Web, in contrast to 20.7 per cent for 7 well-known search engines including HOTBOT, GAIS, LYCOS, EXCITE, INFOSEEK, YAHOO, and OCTOPUS. The very low error rate indicates that users can use the Dublin Core information to decide whether or not to retrieve the documents.
  7. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.05
    0.053949714 = product of:
      0.08092457 = sum of:
        0.053679425 = weight(_text_:search in 3572) [ClassicSimilarity], result of:
          0.053679425 = score(doc=3572,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.30720934 = fieldWeight in 3572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=3572)
        0.027245143 = product of:
          0.054490287 = sum of:
            0.054490287 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.054490287 = score(doc=3572,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes 3 searches on the topic of virtual communities, carried out on the WWW using HotBot and on traditional databases using LEXIS-NEXIS and ABI/Inform. Concludes that the WWW is a good starting place for a broad concept search, but that the traditional services are better for more precise topics.
    Source
    Online. 22(1998) no.3, S.24-26,28
  8. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.05
    0.047206 = product of:
      0.070809 = sum of:
        0.0469695 = weight(_text_:search in 3368) [ClassicSimilarity], result of:
          0.0469695 = score(doc=3368,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 3368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3368)
        0.0238395 = product of:
          0.047679 = sum of:
            0.047679 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
              0.047679 = score(doc=3368,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2708308 = fieldWeight in 3368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
  9. Qiu, L.: Analytical searching vs. browsing in hypertext information retrieval systems (1993) 0.04
    0.044283267 = product of:
      0.1328498 = sum of:
        0.1328498 = weight(_text_:search in 7416) [ClassicSimilarity], result of:
          0.1328498 = score(doc=7416,freq=16.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.76030433 = fieldWeight in 7416, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7416)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports an experiment conducted to study the search behaviour of different user groups in a hypertext information retrieval system. A three-way analysis of variance was conducted to study the effects of gender, search task, and search experience on search option (analytical searching versus browsing), as measured by the proportion of nodes reached through analytical searching. The search task factor influenced search option in that a general task caused more browsing and a specific task more analytical searching. Neither gender nor search experience alone affected the search option. These findings are discussed in light of the evaluation of existing systems and the implications for future design.
  10. Kristensen, J.: Expanding end-users' query statements for free text searching with a search-aid thesaurus (1993) 0.04
    0.04382907 = product of:
      0.1314872 = sum of:
        0.1314872 = weight(_text_:search in 6621) [ClassicSimilarity], result of:
          0.1314872 = score(doc=6621,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.75250614 = fieldWeight in 6621, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=6621)
      0.33333334 = coord(1/3)
    
    Abstract
    Tests the effectiveness of a thesaurus as a search aid in free-text searching of a full-text database. A set of queries was searched against a large full-text database of newspaper articles. The thesaurus contained equivalence, hierarchical and associative relationships. Each query was searched in five modes: basic search, synonym search, narrower term search, related term search, and the union of all previous searches. The searches were analyzed in terms of relative recall and precision.
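    Since absolute recall cannot be known for a large full-text database, relative recall is normally computed against the pool of relevant documents found by the union of all search modes. A minimal, hypothetical sketch of that calculation (document IDs, mode names and the pooling assumption are illustrative, not taken from the paper):

      def precision_and_relative_recall(retrieved_by_mode, relevant):
          # Relative recall uses the relevant documents found by the union of all
          # modes as the denominator (the usual approach when absolute recall is unknown).
          pooled_relevant = relevant & set().union(*retrieved_by_mode.values())
          results = {}
          for mode, retrieved in retrieved_by_mode.items():
              hits = retrieved & relevant
              precision = len(hits) / len(retrieved) if retrieved else 0.0
              rel_recall = len(hits) / len(pooled_relevant) if pooled_relevant else 0.0
              results[mode] = (precision, rel_recall)
          return results

      # Hypothetical document IDs retrieved per search mode
      modes = {
          "basic":    {1, 2, 3},
          "synonym":  {1, 2, 3, 4, 7},
          "narrower": {2, 5},
      }
      modes["union"] = set().union(*modes.values())
      relevant = {1, 2, 4, 5, 9}
      for mode, (p, rr) in precision_and_relative_recall(modes, relevant).items():
          print(f"{mode:9s} precision={p:.2f} relative_recall={rr:.2f}")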
  11. Iivonen, M.: Factors lowering the consistency in online searching (1995) 0.04
    0.04142321 = product of:
      0.12426962 = sum of:
        0.12426962 = weight(_text_:search in 3869) [ClassicSimilarity], result of:
          0.12426962 = score(doc=3869,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.71119964 = fieldWeight in 3869, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3869)
      0.33333334 = coord(1/3)
    
    Abstract
    Considers factors lowering both intersearcher and intrasearcher consistency in online searching. 32 searchers with different backgrounds first analyzed 12 search requests and, after 2 months, 8 of the same search requests, and formulated query statements from them for a search. Intersearcher consistency was the result of more than one factor. There were more differences between searchers in the selection of search terms than in the selection of search concepts. The most important factor lowering intrasearcher consistency was that the same searcher selected different search terms to describe the same search concepts on various occasions.
  12. Spink, A.; Goodrum, A.; Robins, D.: Search intermediary elicitations during mediated online searching (1995) 0.04
    0.04142321 = product of:
      0.12426962 = sum of:
        0.12426962 = weight(_text_:search in 3872) [ClassicSimilarity], result of:
          0.12426962 = score(doc=3872,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.71119964 = fieldWeight in 3872, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3872)
      0.33333334 = coord(1/3)
    
    Abstract
    Investigates search intermediary elicitations during mediated online searching. A study of 40 online reference interviews, involving 1,557 search intermediary elicitations, found 15 different types of search intermediary elicitations to users. The elicitation purposes included search terms and strategies, database selection, relevance of retrieved items, users' knowledge and previous information seeking. Analysis of the patterns in the types and sequencing of elicitations showed significant strings of multiple elicitations regarding search terms and strategies, and relevance judgements. Discusses the implications of the findings for training search intermediaries and for the design of interfaces eliciting information from end users.
  13. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.04
    0.040462285 = product of:
      0.060693428 = sum of:
        0.04025957 = weight(_text_:search in 4341) [ClassicSimilarity], result of:
          0.04025957 = score(doc=4341,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.230407 = fieldWeight in 4341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4341)
        0.020433856 = product of:
          0.040867712 = sum of:
            0.040867712 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
              0.040867712 = score(doc=4341,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.23214069 = fieldWeight in 4341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4341)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments, and their effectiveness was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and for the production and use of the packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that the choice of search strategies, search effectiveness and the use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies.
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  14. Qiu, L.: Markov models of search state patterns in a hypertext information retrieval system (1993) 0.04
    0.038350433 = product of:
      0.1150513 = sum of:
        0.1150513 = weight(_text_:search in 5296) [ClassicSimilarity], result of:
          0.1150513 = score(doc=5296,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.65844285 = fieldWeight in 5296, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5296)
      0.33333334 = coord(1/3)
    
    Abstract
    The objective of this research is to discover the search state patterns through which users retrieve information in hypertext systems. A Markov model is used to describe users' search behavior. As determined by a log-linear model test, the second-order Markov model is the best model. Search patterns of different user groups were studied by comparing the corresponding transition probability matrices. The comparisons were made based on the following factors: gender, search experience, search task, and the user's academic background. The statistical tests revealed that there were significant differences between all the groups being compared.
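    A Markov model of search states is estimated by counting observed transitions between states and normalising each row into probabilities; the study compares such transition probability matrices across user groups (it found a second-order model best, which applies the same counting to pairs of preceding states). A minimal first-order sketch with made-up state codes and sessions for illustration:

      from collections import Counter, defaultdict

      def transition_matrix(sequences):
          # Maximum-likelihood estimate of first-order transition probabilities
          # from observed search-state sequences.
          counts = defaultdict(Counter)
          for seq in sequences:
              for current, nxt in zip(seq, seq[1:]):
                  counts[current][nxt] += 1
          return {
              state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
              for state, nexts in counts.items()
          }

      # Hypothetical search-state sequences (e.g. Q = query, B = browse, V = view document)
      sessions = [
          ["Q", "V", "B", "B", "V"],
          ["Q", "Q", "V", "B"],
          ["B", "V", "Q", "V"],
      ]
      for state, probs in transition_matrix(sessions).items():
          print(state, {k: round(p, 2) for k, p in probs.items()})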
  15. Marchionini, G.: Information seeking in full-text end-user-oriented search system : the roles of domain and search expertise (1993) 0.04
    0.035786286 = product of:
      0.10735885 = sum of:
        0.10735885 = weight(_text_:search in 5100) [ClassicSimilarity], result of:
          0.10735885 = score(doc=5100,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.6144187 = fieldWeight in 5100, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=5100)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a study that identifies and examines the roles that information-seeking expertise and domain expertise play in information seeking in full-text, end-user-oriented search systems. This forms part of an investigation to characterise information seeking and to determine how it is affected by interactive electronic access to primary information. Distinguishes between the approaches of search experts and domain experts. Makes recommendations for systems design.
  16. Wildemuth, B.M.; Jacob, E.K.; Fullington, A.; Bliek, R. de; Friedman, C.P.: A detailed analysis of end-user search behaviours (1991) 0.04
    0.035364427 = product of:
      0.10609328 = sum of:
        0.10609328 = weight(_text_:search in 2423) [ClassicSimilarity], result of:
          0.10609328 = score(doc=2423,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.60717577 = fieldWeight in 2423, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2423)
      0.33333334 = coord(1/3)
    
    Abstract
    Search statements in the query revision process can be viewed as 'moves' in the overall search strategy. Very little is known about how end users develop and revise their search strategies. A study was conducted to analyse the moves made in 244 database searches conducted by 26 medical students at the University of North Carolina at Chapel Hill. Students searched INQUIRER, a database of facts and concepts in microbiology. The searches were conducted during a 3-week period in spring 1990 and were recorded by the INQUIRER system. Each search statement was categorised using Fidel's online searching moves (see Online review 9(1985) S.61-74) and Bates' search tactics (see JASIS 30(1979) S.205-214). Further analyses indicated that the most common moves were Browse/Specify, Select, Exhaust, Intersect, and Vary, and that the selection of moves varied by student and by problem. Analysis of search tactics (combinations of moves) identified 5 common search approaches. The results of this study have implications for future research on search behaviours, for the design of system interfaces and database structures, and for the training of end users.
  17. Peters, T.A.; Kurth, M.: Controlled and uncontrolled vocabulary subject searching in an academic library online catalog (1991) 0.04
    0.035008997 = product of:
      0.10502698 = sum of:
        0.10502698 = weight(_text_:search in 2348) [ClassicSimilarity], result of:
          0.10502698 = score(doc=2348,freq=10.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.6010733 = fieldWeight in 2348, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2348)
      0.33333334 = coord(1/3)
    
    Abstract
    An analysis of transaction logs from an academic library online catalog describes instances in which users have tried both controlled and uncontrolled (title keyword) vocabulary subject access during the same search session. Eight hypotheses were tested. Over 6.6% of all dial access search sessions contained both methods of subject access. Over 58% of the isolated sessions began with an uncontrolled vocabulary attempt. Over 76% contained only one vocabulary shift. On average, user persistence was greater during controlled vocabulary search logs, but search output was greater during uncontrolled vocabulary search logs. Several recommendations regarding catalog design and instruction are made.
  18. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.03
    0.03371857 = product of:
      0.050577857 = sum of:
        0.03354964 = weight(_text_:search in 2339) [ClassicSimilarity], result of:
          0.03354964 = score(doc=2339,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.017028214 = product of:
          0.03405643 = sum of:
            0.03405643 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.03405643 = score(doc=2339,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  19. Johnson, K.E.: OPAC missing record retrieval (1996) 0.03
    0.0328718 = product of:
      0.0986154 = sum of:
        0.0986154 = weight(_text_:search in 6735) [ClassicSimilarity], result of:
          0.0986154 = score(doc=6735,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5643796 = fieldWeight in 6735, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=6735)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports results of a study, conducted at Rhode Island University Library, to determine whether cataloguing records known to be missing from a library consortium OPAC database could be identified using the database's search features. Attempts to create lists of bibliographic records held by other libraries in the consortium using Boolean searching features failed due to search feature limitations. Samples of search logic were created, collections of records based on this logic were assembled manually and then compared with the card catalogue of the single library. Results suggest that use of the Boolean OR operator to conduct the broadest possible search could find 56.000 of the library's missing records that were held by other libraries. Use of the Boolean AND operator to conduct the narrowest search found 85.000 missing records. A search of the records of the consortium library most likely to have overlaid the single library's holdings found that 80.000 of the single library's missing records were held by that library.
  20. Hallet, K.S.: Separate but equal? : A system comparison study of MEDLINE's controlled vocabulary MeSH (1998) 0.03
    0.0328718 = product of:
      0.0986154 = sum of:
        0.0986154 = weight(_text_:search in 3553) [ClassicSimilarity], result of:
          0.0986154 = score(doc=3553,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5643796 = fieldWeight in 3553, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3553)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports results of a study to test the effect of controlled vocabulary search feature implementation in 2 online systems. Specifically, the study examined retrieval rates using 4 unique controlled vocabulary search features (Explode, major descriptor, descriptor, subheadings). 2 questions were addressed: what, if any, are the general differences between the controlled vocabulary system implementations in DIALOG and Ovid; and what, if any, are the impacts of each of the differing controlled vocabulary search features upon retrieval rates? Each search feature was applied to 9 search queries obtained from a medical reference librarian. The same queries were searched in the complete MEDLINE file on the DIALOG and Ovid online host systems. The unique records (those retrieved in only 1 of the 2 systems) were identified and analyzed. DIALOG produced as many or more records than Ovid in nearly 20% of the queries. Concludes that users need to be aware of system-specific designs that may require differing input strategies across different systems for the same unique controlled vocabulary search features. Makes recommendations and suggestions for future research.

Types

  • a 79
  • m 2
  • s 2
  • el 1