Search (167 results, page 1 of 9)

  • theme_ss:"Retrievalstudien"
  1. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.10
    0.10177215 = product of:
      0.2035443 = sum of:
        0.2035443 = sum of:
          0.16127442 = weight(_text_:search in 1757) [ClassicSimilarity], result of:
            0.16127442 = score(doc=1757,freq=30.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.89236253 = fieldWeight in 1757, product of:
                5.477226 = tf(freq=30.0), with freq of:
                  30.0 = termFreq=30.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=1757)
          0.04226988 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
            0.04226988 = score(doc=1757,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.23214069 = fieldWeight in 1757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1757)
      0.5 = coord(1/2)
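    The breakdown above follows Lucene's ClassicSimilarity (TF-IDF) explain() output. As a sketch, the weight of the first clause can be reproduced from the quantities shown in the tree (tf is the square root of the term frequency, idf is 1 + ln(maxDocs/(docFreq+1)), and the final score is queryWeight × fieldWeight):

    ```python
    import math

    def classic_similarity_weight(freq, doc_freq, max_docs, field_norm, query_norm):
        """Reproduce one term weight from a Lucene ClassicSimilarity explain() tree."""
        tf = math.sqrt(freq)                              # 5.477226 for freq=30
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.475677 for docFreq=3718
        query_weight = idf * query_norm                   # 0.18072747
        field_weight = tf * idf * field_norm              # 0.89236253
        return query_weight * field_weight                # 0.16127442

    # Values taken from the "search in 1757" clause above:
    score = classic_similarity_weight(30.0, 3718, 44218, 0.046875, 0.051997773)
    print(score)
    ```

    Plugging in the values of the second clause ("22 in 1757": freq=2.0, docFreq=3622) reproduces its weight of 0.04226988 the same way.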
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study in which 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept after a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept under a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions
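    The graded matching levels described in the abstract can be illustrated with a toy comparison. The level definitions follow the abstract; the normalization rules below (case-folding, naive plural stripping) are invented for illustration and are not Iivonen's actual procedure:

    ```python
    def level1_match(a, b):
        # Level I: search terms compared character by character.
        return a == b

    def level2_match(a, b):
        # Level II: a "rather simple evaluation of linguistic expressions",
        # approximated here (hypothetically) by case-folding and stripping
        # a trailing plural 's'.
        norm = lambda term: term.lower().rstrip("s")
        return norm(a) == norm(b)

    print(level1_match("Libraries", "libraries"))  # False at level I
    print(level2_match("Libraries", "libraries"))  # True at level II
    ```

    Levels III and IV would relax the comparison further (hierarchical control, broad conceptual interpretation), which requires thesaurus knowledge rather than string rules.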
  2. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.07
    0.06966355 = product of:
      0.1393271 = sum of:
        0.1393271 = sum of:
          0.10410219 = weight(_text_:search in 1786) [ClassicSimilarity], result of:
            0.10410219 = score(doc=1786,freq=18.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.5760175 = fieldWeight in 1786, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
          0.035224903 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
            0.035224903 = score(doc=1786,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.19345059 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Date
    6. 4.2015 19:31:22
  3. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.07
    0.06672983 = product of:
      0.13345966 = sum of:
        0.13345966 = sum of:
          0.0841448 = weight(_text_:search in 5598) [ClassicSimilarity], result of:
            0.0841448 = score(doc=5598,freq=6.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.46558946 = fieldWeight in 5598, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5598)
          0.049314864 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
            0.049314864 = score(doc=5598,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.2708308 = fieldWeight in 5598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5598)
      0.5 = coord(1/2)
    
    Abstract
    Research shows that 65-80% of subject search terms fail to match the appropriate subject heading and one third to one half of subject searches result in no references being retrieved. Examines the subject search terms generated by 82 school and college students in Princeton, NJ, evaluates the match between the named terms and the expected subject headings, and proposes an explanation for match failures in relation to 3 invariant properties common to all search terms: concreteness, complexity, and syndeticity. Suggests that match failure is a consequence of developmental naming patterns and that these patterns can be overcome through the use of metacognitive naming skills
    Date
    2.11.1996 13:08:22
  4. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.06
    0.05594051 = product of:
      0.11188102 = sum of:
        0.11188102 = sum of:
          0.055521168 = weight(_text_:search in 3572) [ClassicSimilarity], result of:
            0.055521168 = score(doc=3572,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.30720934 = fieldWeight in 3572, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0625 = fieldNorm(doc=3572)
          0.056359846 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
            0.056359846 = score(doc=3572,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.30952093 = fieldWeight in 3572, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3572)
      0.5 = coord(1/2)
    
    Abstract
    Describes 3 searches on the topic of virtual communities done on the WWW using HotBot and on traditional databases using LEXIS-NEXIS and ABI/Inform. Concludes that the WWW is a good starting place for a broad concept search but that the traditional services are better for more precise topics
    Source
    Online. 22(1998) no.3, S.24-26,28
  5. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.05
    0.048947945 = product of:
      0.09789589 = sum of:
        0.09789589 = sum of:
          0.048581023 = weight(_text_:search in 2718) [ClassicSimilarity], result of:
            0.048581023 = score(doc=2718,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.2688082 = fieldWeight in 2718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2718)
          0.049314864 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
            0.049314864 = score(doc=2718,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.2708308 = fieldWeight in 2718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2718)
      0.5 = coord(1/2)
    
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual inquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the inquiries produced no noise, but the mean average percentage noise figure was approximately 33 per cent, based on a total average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices
    Source
    Journal of documentation. 22(1966), S.203-209
  6. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.05
    0.048947945 = product of:
      0.09789589 = sum of:
        0.09789589 = sum of:
          0.048581023 = weight(_text_:search in 3368) [ClassicSimilarity], result of:
            0.048581023 = score(doc=3368,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.2688082 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
          0.049314864 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
            0.049314864 = score(doc=3368,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.2708308 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
      0.5 = coord(1/2)
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems
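    The idea of an analytically computed average search length can be sketched with a deliberately simplified model (this is an illustration of the concept, not Losee's actual derivation): given independent per-rank probabilities of relevance, the expected number of documents a searcher examines before hitting the first relevant one follows directly:

    ```python
    def expected_search_length(p_relevant):
        """Expected number of documents examined to find the first relevant
        one, given independent per-rank relevance probabilities.
        A hypothetical simplification for illustration; documents beyond
        the list are ignored."""
        expected = 0.0
        p_none_so_far = 1.0  # probability no relevant document seen yet
        for rank, p in enumerate(p_relevant, start=1):
            expected += rank * p_none_so_far * p
            p_none_so_far *= (1.0 - p)
        return expected

    # A method that concentrates relevance near the top of the ranking
    # yields a shorter expected search length than a flat one.
    print(expected_search_length([0.9, 0.5, 0.1]))
    print(expected_search_length([0.3, 0.3, 0.3]))
    ```

    The point of such models is exactly what the abstract claims: the comparison between two ranking methods can be made from parameter values alone, without running an experiment.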
    Date
    22. 2.1996 13:14:10
  7. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.05
    0.048947945 = product of:
      0.09789589 = sum of:
        0.09789589 = sum of:
          0.048581023 = weight(_text_:search in 5001) [ClassicSimilarity], result of:
            0.048581023 = score(doc=5001,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.2688082 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.049314864 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.049314864 = score(doc=5001,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.5 = coord(1/2)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  8. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.04
    0.041955378 = product of:
      0.083910756 = sum of:
        0.083910756 = sum of:
          0.041640874 = weight(_text_:search in 4341) [ClassicSimilarity], result of:
            0.041640874 = score(doc=4341,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.230407 = fieldWeight in 4341, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=4341)
          0.04226988 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
            0.04226988 = score(doc=4341,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.23214069 = fieldWeight in 4341, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4341)
      0.5 = coord(1/2)
    
    Abstract
    Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments. The effectiveness of the packages was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and the production and use of packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that choice of searching strategies, search effectiveness and use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  9. Balog, K.; Schuth, A.; Dekker, P.; Tavakolpoursaleh, N.; Schaer, P.; Chuang, P.-Y.: Overview of the TREC 2016 Open Search track Academic Search Edition (2016) 0.04
    0.039259393 = product of:
      0.078518786 = sum of:
        0.078518786 = product of:
          0.15703757 = sum of:
            0.15703757 = weight(_text_:search in 43) [ClassicSimilarity], result of:
              0.15703757 = score(doc=43,freq=16.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.86891925 = fieldWeight in 43, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=43)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present the TREC Open Search track, which represents a new evaluation paradigm for information retrieval. It offers the possibility for researchers to evaluate their approaches in a live setting, with real, unsuspecting users of an existing search engine. The first edition of the track focuses on the academic search domain and features the ad-hoc scientific literature search task. We report on experiments with three different academic search engines: CiteSeerX, SSOAR, and Microsoft Academic Search.
  10. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.03
    0.034962818 = product of:
      0.069925636 = sum of:
        0.069925636 = sum of:
          0.03470073 = weight(_text_:search in 2339) [ClassicSimilarity], result of:
            0.03470073 = score(doc=2339,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.19200584 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
          0.035224903 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
            0.035224903 = score(doc=2339,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.19345059 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
      0.5 = coord(1/2)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems
    Date
    22. 9.1997 19:16:05
  11. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.03
    0.034962818 = product of:
      0.069925636 = sum of:
        0.069925636 = sum of:
          0.03470073 = weight(_text_:search in 4540) [ClassicSimilarity], result of:
            0.03470073 = score(doc=4540,freq=2.0), product of:
              0.18072747 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.051997773 = queryNorm
              0.19200584 = fieldWeight in 4540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4540)
          0.035224903 = weight(_text_:22 in 4540) [ClassicSimilarity], result of:
            0.035224903 = score(doc=4540,freq=2.0), product of:
              0.18208735 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051997773 = queryNorm
              0.19345059 = fieldWeight in 4540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4540)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This study intends to identify factors that affect relevance judgment of retrieved information as part of the 2007 TREC Legal track interactive task. Design/methodology/approach - Data were gathered and analyzed from the participants of the 2007 TREC Legal track interactive task using a questionnaire which includes not only a list of 80 relevance factors identified in prior research, but also a space for expressing their thoughts on relevance judgment in the process. Findings - This study finds that topicality remains a primary criterion, out of various options, for determining relevance, while specificity of the search request, task, or retrieved results also helps greatly in relevance judgment. Research limitations/implications - Relevance research should focus on the topicality and specificity of what is being evaluated as well as conducted in real environments. Practical implications - If multiple relevance factors are presented to assessors, the total number in a list should be below ten to take account of the limited processing capacity of human beings' short-term memory. Otherwise, the assessors might either completely ignore or inadequately consider some of the relevance factors when making judgment decisions. Originality/value - This study presents a method for reducing the artificiality of relevance research design, an apparent limitation in many related studies. Specifically, relevance judgment was made in this research as part of the 2007 TREC Legal track interactive task rather than a study devised for the sake of it. The assessors also served as searchers so that their searching experience would facilitate their subsequent relevance judgments.
    Date
    12. 7.2011 18:29:22
  12. Qiu, L.: Analytical searching vs. browsing in hypertext information retrieval systems (1993) 0.03
    0.034351967 = product of:
      0.068703935 = sum of:
        0.068703935 = product of:
          0.13740787 = sum of:
            0.13740787 = weight(_text_:search in 7416) [ClassicSimilarity], result of:
              0.13740787 = score(doc=7416,freq=16.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.76030433 = fieldWeight in 7416, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7416)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports an experiment conducted to study search behaviour of different user groups in a hypertext information retrieval system. A three-way analysis of variance test was conducted to study the effects of gender, search task, and search experience on search option (analytical searching versus browsing), as measured by the proportion of nodes reached through analytical searching. The search task factor influenced search option in that a general task caused more browsing and a specific task more analytical searching. Gender or search experience alone did not affect the search option. These findings are discussed in light of evaluation of existing systems and implications for future design
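    The dependent measure described, the proportion of nodes reached through analytical searching, is straightforward to compute from a labelled navigation log. A minimal sketch (the label names are assumptions for illustration):

    ```python
    def analytical_proportion(node_labels):
        """Share of visited nodes reached by analytical searching rather than
        browsing. `node_labels` is one 'analytical' or 'browsing' label per
        node visit, in the order the nodes were reached."""
        if not node_labels:
            raise ValueError("empty navigation log")
        return sum(1 for label in node_labels if label == "analytical") / len(node_labels)

    # A session that reached 3 of 4 nodes via query formulation:
    print(analytical_proportion(["analytical", "browsing", "analytical", "analytical"]))
    ```

    Per-session proportions like this would then feed the ANOVA over the gender, task, and experience factors.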
  13. Kristensen, J.: Expanding end-users' query statements for free text searching with a search-aid thesaurus (1993) 0.03
    0.033999633 = product of:
      0.067999266 = sum of:
        0.067999266 = product of:
          0.13599853 = sum of:
            0.13599853 = weight(_text_:search in 6621) [ClassicSimilarity], result of:
              0.13599853 = score(doc=6621,freq=12.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.75250614 = fieldWeight in 6621, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6621)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Tests the effectiveness of a thesaurus as a search-aid in free text searching of a full text database. A set of queries was searched against a large full text database of newspaper articles. The thesaurus contained equivalence, hierarchical and associative relationships. Each query was searched in five modes: basic search, synonym search, narrower term search, related term search, and union of all previous searches. The searches were analyzed in terms of relative recall and precision
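    The five search modes can be sketched as query expansion over a thesaurus carrying the three relationship types named in the abstract. The thesaurus entry below is invented for illustration and is not from Kristensen's test collection:

    ```python
    # Toy thesaurus: equivalence (synonyms), hierarchical (narrower terms)
    # and associative (related terms) relationships for each entry term.
    THESAURUS = {
        "unemployment": {
            "synonym": ["joblessness"],
            "narrower": ["youth unemployment"],
            "related": ["labour market"],
        },
    }

    def expand(term, mode):
        """Expand one query term according to one of the five search modes."""
        entry = THESAURUS.get(term, {})
        if mode == "basic":
            return {term}
        if mode in ("synonym", "narrower", "related"):
            return {term} | set(entry.get(mode, []))
        if mode == "union":
            # Union of the basic, synonym, narrower and related searches.
            return {term} | {t for terms in entry.values() for t in terms}
        raise ValueError(f"unknown mode: {mode}")

    print(expand("unemployment", "union"))
    ```

    Each expanded term set would be turned into an OR-group in the free-text query; recall and precision are then compared across the five modes.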
  14. Palmquist, R.A.; Kim, K.-S.: Cognitive style and on-line database search experience as predictors of Web search performance (2000) 0.03
    0.032920003 = product of:
      0.065840006 = sum of:
        0.065840006 = product of:
          0.13168001 = sum of:
            0.13168001 = weight(_text_:search in 4605) [ClassicSimilarity], result of:
              0.13168001 = score(doc=4605,freq=20.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.72861093 = fieldWeight in 4605, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4605)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study sought to investigate the effects of cognitive style (field dependent and field independent) and on-line database search experience (novice and experienced) on the WWW search performance of undergraduate college students (n=48). It also attempted to find user factors that could be used to predict search efficiency. search performance, the dependent variable was defined in 2 ways: (1) time required for retrieving a relevant information item, and (2) the number of nodes traversed for retrieving a relevant information item. the search tasks required were carried out on a University Web site, and included a factual task and a topical search task of interest to the participant. Results indicated that while cognitive style (FD/FI) significantly influenced the search performance of novice searchers, the influence was greatly reduced in those searchers who had on-line database search experience. Based on the findings, suggestions for possible changes to the design of the current Web interface and to user training programs are provided
  15. MacFarlane, A.: Evaluation of web search for the information practitioner (2007) 0.03
    0.032920003 = product of:
      0.065840006 = sum of:
        0.065840006 = product of:
          0.13168001 = sum of:
            0.13168001 = weight(_text_:search in 817) [ClassicSimilarity], result of:
              0.13168001 = score(doc=817,freq=20.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.72861093 = fieldWeight in 817, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=817)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The aim of the paper is to put forward a structured mechanism for web search evaluation. The paper seeks to point to useful scientific research and show how information practitioners can use these methods in evaluation of search on the web for their users. Design/methodology/approach - The paper puts forward an approach which utilizes traditional laboratory-based evaluation measures such as average precision/precision at N documents, augmented with diagnostic measures such as broken links, which are used to show why precision measures are depressed as well as the quality of the search engine's crawling mechanism. Findings - The paper shows how to use diagnostic measures in conjunction with precision in order to evaluate web search. Practical implications - The methodology presented in this paper will be useful to any information professional who regularly uses web search as part of their information seeking and needs to evaluate web search services. Originality/value - The paper argues that the use of diagnostic measures is essential in web search, as precision measures on their own do not allow a searcher to understand why search results differ between search engines.
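    Combining precision at N with a diagnostic count can be sketched as follows. The judgment categories below are assumptions chosen to illustrate the idea, not MacFarlane's exact scheme:

    ```python
    def evaluate_at_n(judgments, n=10):
        """Precision at N plus a simple diagnostic measure.

        `judgments` holds one label per ranked result: 'relevant',
        'nonrelevant', or 'broken' (dead link) -- hypothetical categories
        in the spirit of the diagnostic measures described."""
        top = judgments[:n]
        relevant = sum(1 for j in top if j == "relevant")
        broken = sum(1 for j in top if j == "broken")
        return {
            "precision_at_n": relevant / len(top),
            "broken_links": broken,
            # Precision among pages that could actually be inspected, which
            # helps show whether low precision stems from crawl quality:
            "reachable_precision": relevant / max(1, len(top) - broken),
        }

    print(evaluate_at_n(["relevant", "broken", "nonrelevant", "relevant"], n=4))
    ```

    Reporting the plain and reachable precision side by side separates ranking quality from crawling quality, which is the diagnostic point the paper makes.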
  16. Iivonen, M.: Factors lowering the consistency in online searching (1995) 0.03
    0.032133326 = product of:
      0.06426665 = sum of:
        0.06426665 = product of:
          0.1285333 = sum of:
            0.1285333 = weight(_text_:search in 3869) [ClassicSimilarity], result of:
              0.1285333 = score(doc=3869,freq=14.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.71119964 = fieldWeight in 3869, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3869)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Considers factors lowering both intersearcher and intrasearcher consistency in online searching. 32 searchers with different backgrounds first analyzed 12 search requests, and after 2 months 8 of the same search requests, and formulated query statements from them for a search. Intersearcher inconsistency was the result of more than one factor. There were more differences between searchers in the selection of search terms than in the selection of search concepts. The most important factor lowering intrasearcher consistency was that the same searcher selected different search terms to describe the same search concepts on various occasions
  17. Spink, A.; Goodrum, A.; Robins, D.: Search intermediary elicitations during mediated online searching (1995) 0.03
    0.032133326 = product of:
      0.06426665 = sum of:
        0.06426665 = product of:
          0.1285333 = sum of:
            0.1285333 = weight(_text_:search in 3872) [ClassicSimilarity], result of:
              0.1285333 = score(doc=3872,freq=14.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.71119964 = fieldWeight in 3872, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Investigates search intermediary elicitations during mediated online searching. A study of 40 online reference interviews involving 1,557 search intermediary elicitations found 15 different types of search intermediary elicitations to users. The elicitation purposes included search terms and strategies, database selection, relevance of retrieved items, users' knowledge and previous information seeking. Analysis of the patterns in the types and sequencing of elicitations showed significant strings of multiple elicitations regarding search terms and strategies, and relevance judgements. Discusses the implications of the findings for training search intermediaries and the design of interfaces eliciting information from end users
  18. Bar-Ilan, J.: Methods for measuring search engine performance over time (2002) 0.03
    0.031037275 = product of:
      0.06207455 = sum of:
        0.06207455 = product of:
          0.1241491 = sum of:
            0.1241491 = weight(_text_:search in 305) [ClassicSimilarity], result of:
              0.1241491 = score(doc=305,freq=10.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.68694097 = fieldWeight in 305, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=305)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study introduces methods for evaluating search engine performance over a time period. Several measures are defined, which as a whole describe search engine functionality over time. The necessary setup for such studies is described, and the use of these measures is illustrated through a specific example. The set of measures introduced here may serve as a guideline for search engines in testing and improving their functionality. We recommend setting up a standard suite of measures for evaluating search engine performance.
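    Measures of this kind typically compare snapshots of the same query's result set taken at different times. The sketch below shows two such comparisons - the set of disappeared URLs and the overlap between snapshots; the function names and the example snapshots are illustrative assumptions, not Bar-Ilan's exact definitions.

    ```python
    # Hypothetical sketch: time-series comparison of a search engine's
    # results for one query across two crawl snapshots.

    def disappeared(old, new):
        """URLs present in an earlier snapshot but missing from a later one."""
        return set(old) - set(new)

    def overlap_ratio(old, new):
        """Jaccard overlap between two snapshots of the same query's results."""
        old, new = set(old), set(new)
        return len(old & new) / len(old | new) if old | new else 1.0

    march = ["u1", "u2", "u3", "u4"]   # assumed earlier snapshot
    april = ["u2", "u3", "u5"]         # assumed later snapshot
    print(sorted(disappeared(march, april)))   # ['u1', 'u4']
    print(overlap_ratio(march, april))         # 0.4
    ```

    Tracked over many queries and dates, such per-query measures aggregate into the kind of functionality-over-time profile the study proposes.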
  19. Qiu, L.: Markov models of search state patterns in a hypertext information retrieval system (1993) 0.03
    0.029749677 = product of:
      0.059499353 = sum of:
        0.059499353 = product of:
          0.11899871 = sum of:
            0.11899871 = weight(_text_:search in 5296) [ClassicSimilarity], result of:
              0.11899871 = score(doc=5296,freq=12.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.65844285 = fieldWeight in 5296, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5296)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The objective of this research is to discover the search state patterns through which users retrieve information in hypertext systems. The Markov model is used to describe users' search behavior. As determined by the log-linear model test, the second-order Markov model is the best model. Search patterns of different user groups were studied by comparing the corresponding transition probability matrices. The comparisons were made based on the following factors: gender, search experience, search task, and the user's academic background. The statistical tests revealed that there were significant differences between all the groups being compared
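    A second-order Markov model of the kind the abstract describes conditions each search state on the two preceding states, with transition probabilities estimated from logged state sequences. The sketch below illustrates this estimation step; the state names and example logs are invented for illustration and are not Qiu's data.

    ```python
    # Illustrative sketch: estimating an order-k Markov transition table
    # from logged sequences of search states.
    from collections import Counter, defaultdict

    def transition_matrix(sequences, order=2):
        """Estimate P(next_state | previous `order` states) from state logs."""
        counts = defaultdict(Counter)
        for seq in sequences:
            for i in range(len(seq) - order):
                history = tuple(seq[i:i + order])
                counts[history][seq[i + order]] += 1
        return {h: {s: c / sum(ctr.values()) for s, c in ctr.items()}
                for h, ctr in counts.items()}

    logs = [["query", "browse", "query", "view"],    # assumed session logs
            ["query", "browse", "browse", "view"]]
    probs = transition_matrix(logs, order=2)
    # After the history ("query", "browse"), "query" and "browse" each
    # followed once, so both get probability 0.5.
    ```

    Comparing such transition tables between user groups (by gender, experience, task, or background) is what allows the group differences the study tests for.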
  20. Robins, D.: Shifts of focus on various aspects of user information problems during interactive information retrieval (2000) 0.03
    0.029444544 = product of:
      0.058889087 = sum of:
        0.058889087 = product of:
          0.117778175 = sum of:
            0.117778175 = weight(_text_:search in 4995) [ClassicSimilarity], result of:
              0.117778175 = score(doc=4995,freq=16.0), product of:
                0.18072747 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.051997773 = queryNorm
                0.6516894 = fieldWeight in 4995, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4995)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The author presents the results of additional analyses of shifts of focus in IR interaction. Results indicate that users and search intermediaries work toward search goals in nonlinear fashion. Twenty interactions between 20 different users and one of four different search intermediaries were examined. Analysis of discourse between the two parties during interactive information retrieval (IR) shows a change in topic occurs, on average, every seven utterances. These twenty interactions included some 9,858 utterances and 1,439 foci. Utterances are defined as any uninterrupted sound, statement, gesture, etc., made by a participant in the discourse dyad. These utterances are segmented by the researcher according to their intentional focus, i.e., the topic on which the conversation between the user and search intermediary focuses until the focus changes (i.e., shifts of focus). In all but two of the 20 interactions, the search intermediary initiated a majority of shifts of focus. Six focus categories were observed. These were foci dealing with: documents; evaluation of search results; search strategies; IR system; topic of the search; and information about the user
