Search (85 results, page 1 of 5)

  • theme_ss:"Retrievalstudien"
  • year_i:[1990 TO 2000}
  1. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.10
    0.09933082 = product of:
      0.19866164 = sum of:
        0.093082644 = weight(_text_:web in 760) [ClassicSimilarity], result of:
          0.093082644 = score(doc=760,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5769126 = fieldWeight in 760, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
        0.105578996 = weight(_text_:search in 760) [ClassicSimilarity], result of:
          0.105578996 = score(doc=760,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6144187 = fieldWeight in 760, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
      0.5 = coord(2/4)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines
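The score breakdowns throughout this result list are Lucene `explain` output for the ClassicSimilarity (TF-IDF) ranking formula. As a reading aid, here is a minimal sketch that reassembles the 0.0993 score of this first entry from the factors shown in the tree above; the constants are copied from the tree, and the idf formula is Lucene's documented 1 + ln(maxDocs/(docFreq+1)):

```python
import math

def term_score(freq, idf, field_norm, query_norm):
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight    # score contribution of one term

QUERY_NORM = 0.049439456                  # queryNorm from the tree above

# idf(docFreq) = 1 + ln(maxDocs / (docFreq + 1))
idf_web    = 1 + math.log(44218 / (4597 + 1))   # ~3.2635105
idf_search = 1 + math.log(44218 / (3718 + 1))   # ~3.475677

web    = term_score(8.0, idf_web,    0.0625, QUERY_NORM)
search = term_score(8.0, idf_search, 0.0625, QUERY_NORM)
score  = (2 / 4) * (web + search)         # coord(2/4): 2 of 4 clauses matched
print(round(score, 8))                    # ~0.09933082
```

The same recipe reproduces every score tree on this page; only freq, idf, and fieldNorm vary per entry.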
  2. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.09
    0.09459321 = product of:
      0.12612428 = sum of:
        0.046541322 = weight(_text_:web in 3572) [ClassicSimilarity], result of:
          0.046541322 = score(doc=3572,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 3572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3572)
        0.052789498 = weight(_text_:search in 3572) [ClassicSimilarity], result of:
          0.052789498 = score(doc=3572,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 3572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=3572)
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.053586908 = score(doc=3572,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Describes 3 searches on the topic of virtual communities, done on the WWW using HotBot and on traditional services using LEXIS-NEXIS and ABI/Inform. Concludes that the WWW is a good starting place for a broad concept search but that the traditional services are better for more precise topics
    Source
    Online. 22(1998) no.3, S.24-26,28
  3. Agata, T.: ¬A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.09
    0.09089771 = product of:
      0.18179542 = sum of:
        0.06981198 = weight(_text_:web in 3892) [ClassicSimilarity], result of:
          0.06981198 = score(doc=3892,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 3892, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=3892)
        0.11198343 = weight(_text_:search in 3892) [ClassicSimilarity], result of:
          0.11198343 = score(doc=3892,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6516894 = fieldWeight in 3892, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.09375 = fieldNorm(doc=3892)
      0.5 = coord(2/4)
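The ESL measure named in this title is Cooper's Expected Search Length: the expected number of non-relevant documents a user must examine to find a required number of relevant ones in a weakly ordered output. A minimal sketch assuming the classic formulation (the paper's exact variant is not shown here):

```python
def expected_search_length(levels, needed):
    """Cooper's ESL: expected number of non-relevant documents examined
    before `needed` relevant ones are found. `levels` lists
    (relevant, nonrelevant) counts per preference level, best level first;
    documents within a level are assumed to be randomly ordered."""
    seen_nonrel = 0
    for rel, nonrel in levels:
        if needed <= rel:  # the level where the search can stop
            return seen_nonrel + nonrel * needed / (rel + 1)
        needed -= rel
        seen_nonrel += nonrel
    raise ValueError("not enough relevant documents in the ranking")

# one level holding 2 relevant and 3 non-relevant docs; 1 relevant doc needed
print(expected_search_length([(2, 3)], needed=1))   # 1.0
```

Lower ESL means less user effort, which makes it a natural yardstick for comparing search engines whose result lists are only weakly ordered.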
    
  4. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.09
    0.08671737 = product of:
      0.17343473 = sum of:
        0.15333964 = weight(_text_:search in 1757) [ClassicSimilarity], result of:
          0.15333964 = score(doc=1757,freq=30.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.89236253 = fieldWeight in 1757, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=1757)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.04019018 = score(doc=1757,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study in which 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations, between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept after a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept under a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions
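The abstract does not reproduce the consistency formula. A common way to quantify level-I intersearcher consistency (character-by-character term comparison) is the intersection-over-union of two searchers' term sets; a sketch under that assumption:

```python
def consistency(terms_a, terms_b):
    """Intersection-over-union of two searchers' term sets
    (a common consistency measure; the paper's exact formula may differ)."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# level I: terms count as the same only if they match character by character
print(consistency({"online searching", "consistency"},
                  {"online searching", "search terms"}))  # 1 shared of 3 distinct
```

Levels II-IV would relax the equality test inside the set comparison, merging different terms that express the same search concept.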
  5. Khan, K.; Locatis, C.: Searching through cyberspace : the effects of link display and link density on information retrieval from hypertext on the World Wide Web (1998) 0.06
    0.061718337 = product of:
      0.123436674 = sum of:
        0.03490599 = weight(_text_:web in 446) [ClassicSimilarity], result of:
          0.03490599 = score(doc=446,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 446, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=446)
        0.08853068 = weight(_text_:search in 446) [ClassicSimilarity], result of:
          0.08853068 = score(doc=446,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.51520574 = fieldWeight in 446, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=446)
      0.5 = coord(2/4)
    
    Abstract
    This study investigated information retrieval from hypertext on the WWW. Significant main and interaction effects were found for both link density (number of links per display) and display format (in paragraphs or lists) on search performance. Low link densities displayed in list format produced the best overall results, in terms of search accuracy, search time, number of links explored, and search task prioritization. Lower densities affected user ability to prioritize search tasks and produced more accurate searches, while list displays positively affected all aspects of searching except task prioritization. The performance of novices and experts, in terms of their previous experience browsing hypertext on the WWW, was compared. Experts performed better, mostly because of their superior task prioritization
  6. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.06
    0.06036425 = product of:
      0.1207285 = sum of:
        0.04072366 = weight(_text_:web in 399) [ClassicSimilarity], result of:
          0.04072366 = score(doc=399,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.08000484 = weight(_text_:search in 399) [ClassicSimilarity], result of:
          0.08000484 = score(doc=399,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.46558946 = fieldWeight in 399, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.5 = coord(2/4)
    
    Abstract
    Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results
  7. Cross-language information retrieval (1998) 0.05
    0.05378817 = product of:
      0.07171756 = sum of:
        0.014544163 = weight(_text_:web in 6299) [ClassicSimilarity], result of:
          0.014544163 = score(doc=6299,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.09014259 = fieldWeight in 6299, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.023329884 = weight(_text_:search in 6299) [ClassicSimilarity], result of:
          0.023329884 = score(doc=6299,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.13576864 = fieldWeight in 6299, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.03384351 = product of:
          0.06768702 = sum of:
            0.06768702 = weight(_text_:engine in 6299) [ClassicSimilarity], result of:
              0.06768702 = score(doc=6299,freq=6.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.25592852 = fieldWeight in 6299, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Footnote
    Rez. in: Machine translation review: 1999, no.10, S.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davies (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. 
Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
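The bilingual-dictionary lookup the review describes can be sketched as follows; the mini-dictionary and the pass-through handling of unknown terms are illustrative assumptions, not taken from the book:

```python
# toy bilingual dictionary (illustrative, not from the book);
# real CLIR systems must rank, weight, or prune the alternatives
FR_EN = {
    "traitement": ["treatment", "salary", "processing"],
    "affaire":    ["affair", "business", "case"],
}

def translate_query(terms, dictionary):
    """Expand each source-language term to all target-language candidates."""
    out = []
    for term in terms:
        out.extend(dictionary.get(term, [term]))  # pass unknown terms through
    return out

print(translate_query(["traitement", "affaire"], FR_EN))
```

The hard problems the review lists, eliminating wrong alternatives and weighting the rest, all happen after this expansion step.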
  8. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.05
    0.051724557 = product of:
      0.10344911 = sum of:
        0.08000484 = weight(_text_:search in 5598) [ClassicSimilarity], result of:
          0.08000484 = score(doc=5598,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.46558946 = fieldWeight in 5598, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5598)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
              0.046888545 = score(doc=5598,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Research shows that 65-80% of subject search terms fail to match the appropriate subject heading and that one third to one half of subject searches result in no references being retrieved. Examines the subject search terms generated by 82 school and college students in Princeton, NJ, evaluates the match between the named terms and the expected subject headings, and proposes an explanation for match failures in relation to 3 invariant properties common to all search terms: concreteness, complexity, and syndeticity. Suggests that match failure is a consequence of developmental naming patterns and that these patterns can be overcome through the use of metacognitive naming skills
    Date
    2.11.1996 13:08:22
  9. Wu, C.-J.: Experiments on using the Dublin Core to reduce the retrieval error ratio (1998) 0.04
    0.043457236 = product of:
      0.08691447 = sum of:
        0.04072366 = weight(_text_:web in 5201) [ClassicSimilarity], result of:
          0.04072366 = score(doc=5201,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
        0.046190813 = weight(_text_:search in 5201) [ClassicSimilarity], result of:
          0.046190813 = score(doc=5201,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
      0.5 = coord(2/4)
    
    Abstract
    In order to test the power of metadata on information retrieval, an experiment was designed and conducted on a group of 7 graduate students using the Dublin Core as the cataloguing metadata. Results show that, on average, the retrieval error rate is only 2.9 per cent for the MES system (http://140.136.85.194), which utilizes the Dublin Core to describe the documents on the World Wide Web, in contrast to 20.7 per cent for the 7 famous search engines including HOTBOT, GAIS, LYCOS, EXCITE, INFOSEEK, YAHOO, and OCTOPUS. The very low error rate indicates that the users can use the information of the Dublin Core to decide whether to retrieve the documents or not
  10. Smith, M.P.; Pollitt, A.S.: Ranking and relevance feedback extensions to a view-based searching system (1995) 0.04
    0.04324353 = product of:
      0.08648706 = sum of:
        0.03959212 = weight(_text_:search in 3855) [ClassicSimilarity], result of:
          0.03959212 = score(doc=3855,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 3855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3855)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 3855) [ClassicSimilarity], result of:
              0.09378988 = score(doc=3855,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 3855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3855)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The University of Huddersfield, UK, is researching ways of incorporating ranking and relevance feedback techniques into a thesaurus based searching system. The INSPEC database on STN International was searched using the VUSE (View-based Search Engine) interface. Thesaurus terms from documents judged to be relevant by users were used to query INSPEC and create a ranking of documents based on probabilistic methods. An evaluation was carried out to establish whether or not it would be better for the user to continue searching with the thesaurus based front end or to use relevance feedback, looking at the ranked list of documents it would produce. Also looks at the amount of effort the user had to expend to get relevant documents, in terms of the number of non-relevant documents seen between relevant documents
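The abstract does not give the probabilistic ranking formula. A standard choice for ranking from relevance judgements is the Robertson-Sparck Jones relevance weight, sketched here as an assumption rather than VUSE's actual method:

```python
import math

def rsj_weight(N, n, R, r):
    """Robertson-Sparck Jones relevance weight with 0.5 smoothing.
    N: docs in collection, n: docs containing the term,
    R: known relevant docs, r: relevant docs containing the term."""
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

# a term present in most known-relevant documents outranks one that is not
w_strong = rsj_weight(N=10000, n=50, R=10, r=8)
w_weak   = rsj_weight(N=10000, n=50, R=10, r=1)
print(w_strong > w_weak)   # prints True
```

A document's rank score is then the sum of the weights of the query terms it contains, so thesaurus terms drawn from relevant documents pull similar documents up the list.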
  11. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.03
    0.034817543 = product of:
      0.069635086 = sum of:
        0.046190813 = weight(_text_:search in 3368) [ClassicSimilarity], result of:
          0.046190813 = score(doc=3368,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 3368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3368)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
              0.046888545 = score(doc=3368,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 3368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of 2 similarly performing systems is superior. For both single-term and multiple-term query retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single-term model performs, and sample performance predictions are given for single-term and multiple-term problems
    Date
    22. 2.1996 13:14:10
  12. Qiu, L.: Analytical searching vs. browsing in hypertext information retrieval systems (1993) 0.03
    0.032661837 = product of:
      0.13064735 = sum of:
        0.13064735 = weight(_text_:search in 7416) [ClassicSimilarity], result of:
          0.13064735 = score(doc=7416,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.76030433 = fieldWeight in 7416, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7416)
      0.25 = coord(1/4)
    
    Abstract
    Reports an experiment conducted to study the search behaviour of different user groups in a hypertext information retrieval system. A three-way analysis of variance test was conducted to study the effects of gender, search task, and search experience on search option (analytical searching versus browsing), as measured by the proportion of nodes reached through analytical searching. The search task factor influenced search option in that a general task caused more browsing and a specific task more analytical searching. Neither gender nor search experience alone affected the search option. These findings are discussed in light of the evaluation of existing systems and implications for future design
  13. Kristensen, J.: Expanding end-users' query statements for free text searching with a search-aid thesaurus (1993) 0.03
    0.032326832 = product of:
      0.12930733 = sum of:
        0.12930733 = weight(_text_:search in 6621) [ClassicSimilarity], result of:
          0.12930733 = score(doc=6621,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.75250614 = fieldWeight in 6621, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=6621)
      0.25 = coord(1/4)
    
    Abstract
    Tests the effectiveness of a thesaurus as a search-aid in free text searching of a full text database. A set of queries was searched against a large full text database of newspaper articles. The thesaurus contained equivalence, hierarchical and associative relationships. Each query was searched in five modes: basic search, synonym search, narrower term search, related term search, and union of all previous searches. The searches were analyzed in terms of relative recall and precision
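Relative recall, used in this and several other studies on this page because the full relevant set of a large database is unknown, scores each search mode against the pool of relevant documents found by any mode. A sketch with hypothetical document IDs:

```python
def relative_recall(mode_hits, all_modes_hits):
    """Share of the pooled relevant documents that one search mode retrieved.
    The pool (union of all modes' relevant hits) stands in for the
    unknowable full set of relevant documents."""
    pool = set().union(*all_modes_hits)
    return len(set(mode_hits) & pool) / len(pool)

# hypothetical relevant document IDs retrieved by three of the five modes
basic, synonym, narrower = {1, 2, 3}, {1, 2, 3, 4, 5}, {2, 6}
modes = [basic, synonym, narrower]
print(relative_recall(synonym, modes))   # 5 of the 6 pooled relevant docs
```

Precision needs no pooling: it is simply the relevant fraction of what each mode actually retrieved.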
  14. Iivonen, M.: Factors lowering the consistency in online searching (1995) 0.03
    0.030552352 = product of:
      0.12220941 = sum of:
        0.12220941 = weight(_text_:search in 3869) [ClassicSimilarity], result of:
          0.12220941 = score(doc=3869,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.71119964 = fieldWeight in 3869, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3869)
      0.25 = coord(1/4)
    
    Abstract
    Considers factors lowering both intersearcher and intrasearcher consistency in online searching. 32 searchers with different backgrounds first analyzed 12 search requests and, after 2 months, 8 of the same search requests, formulating query statements from them for a search. Intersearcher consistency was the result of more than one factor: there were more differences between searchers in the selection of search terms than in the selection of search concepts. The most important factor lowering intrasearcher consistency was that the same searcher selected different search terms to describe the same search concepts on different occasions
  15. Spink, A.; Goodrum, A.; Robins, D.: Search intermediary elicitations during mediated online searching (1995) 0.03
    0.030552352 = product of:
      0.12220941 = sum of:
        0.12220941 = weight(_text_:search in 3872) [ClassicSimilarity], result of:
          0.12220941 = score(doc=3872,freq=14.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.71119964 = fieldWeight in 3872, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3872)
      0.25 = coord(1/4)
    
    Abstract
    Investigates search intermediary elicitations during mediated online searching. A study of 40 online reference interviews involving 1,557 search intermediary elicitations found 15 different types of elicitation directed to users. The elicitation purposes included search terms and strategies, database selection, relevance of retrieved items, and users' knowledge and previous information seeking. Analysis of the patterns in the types and sequencing of elicitations showed significant strings of multiple elicitations regarding search terms and strategies, and relevance judgements. Discusses the implications of the findings for training search intermediaries and for the design of interfaces eliciting information from end users
  16. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.03
    0.029843606 = product of:
      0.059687212 = sum of:
        0.03959212 = weight(_text_:search in 4341) [ClassicSimilarity], result of:
          0.03959212 = score(doc=4341,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 4341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4341)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
              0.04019018 = score(doc=4341,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 4341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4341)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments. The effectiveness of the packages was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and the production and use of packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that choice of searching strategies, search effectiveness and use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies
    Source
    Journal of information science. 22(1996) no.2, S.79-92
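The score breakdowns shown for each entry follow Lucene's classic tf-idf similarity, where tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), and fieldWeight = tf × idf × fieldNorm. A minimal sketch (assuming this Lucene ClassicSimilarity formulation; function names are illustrative) that reproduces the figures in the explain output for doc=4341 above:

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq, idf_value, field_norm):
    # fieldWeight = tf * idf * fieldNorm, with tf(freq) = sqrt(freq)
    return math.sqrt(freq) * idf_value * field_norm

# figures from the explain output for the '22' term in doc=4341
i = idf(3622, 44218)                  # ~3.5018296
fw = field_weight(2.0, i, 0.046875)   # ~0.23214069
```

The queryWeight and coord factors in the breakdowns are further multiplicative components of the final score; the sketch covers only the per-field weight that dominates each entry.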
  17. Qiu, L.: Markov models of search state patterns in a hypertext information retrieval system (1993) 0.03
    0.028285978 = product of:
      0.11314391 = sum of:
        0.11314391 = weight(_text_:search in 5296) [ClassicSimilarity], result of:
          0.11314391 = score(doc=5296,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.65844285 = fieldWeight in 5296, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5296)
      0.25 = coord(1/4)
    
    Abstract
    The objective of this research is to discover the search state patterns through which users retrieve information in hypertext systems. The Markov model is used to describe users' search behavior. As determined by the log-linear model test, the second-order Markov model is the best model. Search patterns of different user groups were studied by comparing the corresponding transition probability matrices. The comparisons were made based on the following factors: gender, search experience, search task, and the user's academic background. The statistical tests revealed that there were significant differences between all the groups being compared
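A second-order Markov model of the kind described can be estimated by counting transitions out of each two-state history and normalising each row of counts. A minimal sketch, with hypothetical search states and log format (not taken from the study):

```python
from collections import Counter, defaultdict

def transition_matrix(sequences, order=1):
    """Estimate Markov transition probabilities of the given order
    from observed search-state sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - order):
            history = tuple(seq[i:i + order])  # the last `order` states
            counts[history][seq[i + order]] += 1
    # normalise each history's counts into probabilities
    return {h: {s: n / sum(c.values()) for s, n in c.items()}
            for h, c in counts.items()}

# hypothetical sessions: sequences of search states
sessions = [["query", "browse", "query", "view"],
            ["query", "view", "query", "browse"]]
P = transition_matrix(sessions, order=2)
```

Group comparisons such as those in the study would then amount to building one such matrix per user group and testing the matrices for significant differences.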
  18. Marchionini, G.: Information seeking in full-text end-user-oriented search system : the roles of domain and search expertise (1993) 0.03
    0.026394749 = product of:
      0.105578996 = sum of:
        0.105578996 = weight(_text_:search in 5100) [ClassicSimilarity], result of:
          0.105578996 = score(doc=5100,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6144187 = fieldWeight in 5100, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=5100)
      0.25 = coord(1/4)
    
    Abstract
    Presents a study that identifies and examines the roles that information-seeking expertise and domain expertise play in information seeking in full-text, end-user search systems. This forms part of an investigation to characterise information seeking and to determine how it is affected by interactive electronic access to primary information. Distinguishes between the approaches of search experts and domain experts. Makes recommendations for systems design.
  19. Wildemuth, B.M.; Jacob, E.K.; Fullington, A.; Bliek, R. de; Friedman, C.P.: ¬A detailed analysis of end-user search behaviours (1991) 0.03
    0.026083602 = product of:
      0.10433441 = sum of:
        0.10433441 = weight(_text_:search in 2423) [ClassicSimilarity], result of:
          0.10433441 = score(doc=2423,freq=20.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.60717577 = fieldWeight in 2423, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2423)
      0.25 = coord(1/4)
    
    Abstract
    End users typically revise their search statements in the course of an online search; each search statement in this revision process can be viewed as a 'move' in the overall search strategy. Very little is known about how end users develop and revise their search strategies. A study was conducted to analyse the moves made in 244 data base searches conducted by 26 medical students at the University of North Carolina at Chapel Hill. Students searched INQUIRER, a data base of facts and concepts in microbiology. The searches were conducted during a 3-week period in spring 1990 and were recorded by the INQUIRER system. Each search statement was categorised using Fidel's online searching moves (see Online review 9(1985) S.61-74) and Bates' search tactics (see JASIS 30(1979) S.205-214). Further analyses indicated that the most common moves were Browse/Specify, Select, Exhaust, Intersect, and Vary, and that selection of moves varied by student and by problem. Analysis of search tactics (combinations of moves) identified 5 common search approaches. The results of this study have implications for future research on search behaviours, for the design of system interfaces and data base structures, and for the training of end users.
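Once each search statement is coded as a move, finding the most common moves overall and per student is a straightforward frequency tally. A minimal sketch, assuming a hypothetical encoding of the categorised log (the move labels below follow the abstract; the data layout is an assumption):

```python
from collections import Counter

def move_frequencies(searches):
    """searches: list of (student, [move, ...]) pairs, one per search.
    Returns overall move counts and per-student move counts."""
    overall = Counter()
    by_student = {}
    for student, moves in searches:
        overall.update(moves)
        by_student.setdefault(student, Counter()).update(moves)
    return overall, by_student

# hypothetical coded searches
searches = [("s1", ["Browse/Specify", "Intersect"]),
            ("s2", ["Vary", "Intersect"])]
overall, by_student = move_frequencies(searches)
```

Comparing the per-student counters is what supports the abstract's observation that selection of moves varied by student.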
  20. Peters, T.A.; Kurth, M.: Controlled and uncontrolled vocabulary subject searching in an academic library online catalog (1991) 0.03
    0.025821447 = product of:
      0.10328579 = sum of:
        0.10328579 = weight(_text_:search in 2348) [ClassicSimilarity], result of:
          0.10328579 = score(doc=2348,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.6010733 = fieldWeight in 2348, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2348)
      0.25 = coord(1/4)
    
    Abstract
    An analysis of transaction logs from an academic library online catalog describes instances in which users have tried both controlled and uncontrolled (title keyword) vocabulary subject access during the same search session. Eight hypotheses were tested. Over 6.6% of all dial access search sessions contained both methods of subject access. Over 58% of the isolated sessions began with an uncontrolled vocabulary attempt. Over 76% contained only one vocabulary shift. On average, user persistence was greater during controlled vocabulary searches, but search output was greater during uncontrolled vocabulary searches. Several recommendations regarding catalog design and instruction are made.
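The session-level measures reported (sessions containing both access methods, sessions beginning with an uncontrolled attempt, number of vocabulary shifts per session) can all be computed from a transaction log encoded as per-session sequences of search types. A minimal sketch, assuming a hypothetical encoding with 'c' for a controlled vocabulary search and 'u' for an uncontrolled title-keyword search:

```python
def vocabulary_shift_stats(sessions):
    """sessions: list of lists of search types ('c' or 'u'), one list
    per search session. Returns (number of sessions using both methods,
    number of those beginning with 'u', shifts per such session)."""
    both = [s for s in sessions if "c" in s and "u" in s]
    starts_u = sum(1 for s in both if s[0] == "u")
    # a vocabulary shift is any adjacent pair of differing search types
    shifts = [sum(a != b for a, b in zip(s, s[1:])) for s in both]
    return len(both), starts_u, shifts

# hypothetical log: three sessions
log = [["u", "c"], ["c"], ["u", "u", "c", "u"]]
n_both, n_starts_u, shifts = vocabulary_shift_stats(log)
```

Percentages such as those in the abstract would then be n_both divided by the total session count, and so on.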

Languages

Types

  • a 81
  • m 2
  • s 2
  • el 1