Search (149 results, page 1 of 8)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • type_ss:"a"
  1. Carterette, B.: Test collections (2009) 0.08
    0.07622548 = product of:
      0.11433822 = sum of:
        0.08728119 = weight(_text_:book in 3891) [ClassicSimilarity], result of:
          0.08728119 = score(doc=3891,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.39015728 = fieldWeight in 3891, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=3891)
        0.027057027 = product of:
          0.054114055 = sum of:
            0.054114055 = weight(_text_:search in 3891) [ClassicSimilarity], result of:
              0.054114055 = score(doc=3891,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.30720934 = fieldWeight in 3891, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3891)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
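     The indented breakdowns shown for each result are Lucene ClassicSimilarity "explain" traces: every term weight is queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf(freq) x idf x fieldNorm, and coord(m/n) scales a clause group by the fraction of its clauses that matched. A minimal Python sketch (our own re-derivation; the function and variable names are not part of the trace) reproduces the score of this first document:

     ```python
     import math

     # One leaf of a ClassicSimilarity explain trace:
     #   weight = queryWeight * fieldWeight
     #   queryWeight = idf * queryNorm
     #   fieldWeight = tf(freq) * idf * fieldNorm, with tf(freq) = sqrt(freq)
     def leaf_weight(freq, idf, query_norm, field_norm):
         query_weight = idf * query_norm
         field_weight = math.sqrt(freq) * idf * field_norm
         return query_weight * field_weight

     QUERY_NORM = 0.050679956  # shared by all clauses of this query

     book = leaf_weight(freq=2.0, idf=4.414126, query_norm=QUERY_NORM, field_norm=0.0625)
     search = leaf_weight(freq=2.0, idf=3.475677, query_norm=QUERY_NORM, field_norm=0.0625)

     # coord(1/2): the inner clause group matched 1 of its 2 sub-clauses;
     # coord(2/3): the outer query matched 2 of its 3 clauses.
     score = (book + search * 0.5) * (2 / 3)
     print(f"{score:.8f}")  # ~0.07622548, the score reported for doc 3891
     ```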
    
    Abstract
    Research and development of search engines and other information retrieval (IR) systems proceeds by a cycle of design, implementation, and experimentation, with the results of each experiment influencing design decisions in the next iteration of the cycle. Batch experiments on test collections help ensure that this process goes as smoothly and as quickly as possible. A test collection comprises a collection of documents, a set of information needs, and judgments of the relevance of documents to those needs.
    Footnote
     Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  2. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.07
    0.06612858 = product of:
      0.19838575 = sum of:
        0.19838575 = sum of:
          0.15718713 = weight(_text_:search in 1757) [ClassicSimilarity], result of:
            0.15718713 = score(doc=1757,freq=30.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.89236253 = fieldWeight in 1757, product of:
                5.477226 = tf(freq=30.0), with freq of:
                  30.0 = termFreq=30.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=1757)
          0.041198608 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
            0.041198608 = score(doc=1757,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.23214069 = fieldWeight in 1757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1757)
      0.33333334 = coord(1/3)
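     The recurring constants in these traces follow the classic Lucene definitions tf(freq) = sqrt(freq) (hence 5.477226 for freq = 30 above) and idf = 1 + ln(maxDocs / (docFreq + 1)). A short sketch (our illustration, not part of the result page) reproduces them:

     ```python
     import math

     def tf(freq: float) -> float:
         # ClassicSimilarity term-frequency factor
         return math.sqrt(freq)

     def idf(doc_freq: int, max_docs: int) -> float:
         # ClassicSimilarity inverse-document-frequency factor
         return 1.0 + math.log(max_docs / (doc_freq + 1))

     print(tf(30.0))          # ~5.477226  ('search' clause above)
     print(idf(3718, 44218))  # ~3.475677  ('search')
     print(idf(3622, 44218))  # ~3.5018296 ('22')
     print(idf(1454, 44218))  # ~4.414126  ('book')
     ```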
    
    Abstract
     Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study in which 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions
  3. Voorhees, E.M.; Harman, D.K.: ¬The Text REtrieval Conference (2005) 0.06
    0.06481525 = product of:
      0.09722287 = sum of:
        0.08538542 = weight(_text_:book in 5082) [ClassicSimilarity], result of:
          0.08538542 = score(doc=5082,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.38168296 = fieldWeight in 5082, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5082)
        0.01183745 = product of:
          0.0236749 = sum of:
            0.0236749 = weight(_text_:search in 5082) [ClassicSimilarity], result of:
              0.0236749 = score(doc=5082,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.1344041 = fieldWeight in 5082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5082)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
     This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC. The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks," evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field. The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
  4. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.05
    0.045265343 = product of:
      0.13579603 = sum of:
        0.13579603 = sum of:
          0.10146385 = weight(_text_:search in 1786) [ClassicSimilarity], result of:
            0.10146385 = score(doc=1786,freq=18.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.5760175 = fieldWeight in 1786, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
          0.034332175 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
            0.034332175 = score(doc=1786,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.19345059 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Date
    6. 4.2015 19:31:22
  5. Kilgour, F.G.: Retrieval of information from computerized book texts (1989) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 2965) [ClassicSimilarity], result of:
          0.1309218 = score(doc=2965,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 2965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.09375 = fieldNorm(doc=2965)
      0.33333334 = coord(1/3)
    
  6. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.04
    0.0433591 = product of:
      0.1300773 = sum of:
        0.1300773 = sum of:
          0.082012266 = weight(_text_:search in 5598) [ClassicSimilarity], result of:
            0.082012266 = score(doc=5598,freq=6.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.46558946 = fieldWeight in 5598, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5598)
          0.04806504 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
            0.04806504 = score(doc=5598,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.2708308 = fieldWeight in 5598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5598)
      0.33333334 = coord(1/3)
    
    Abstract
     Research shows that 65-80% of subject search terms fail to match the appropriate subject heading and one third to one half of subject searches result in no references being retrieved. Examines the subject search terms generated by 82 school and college students in Princeton, NJ, evaluates the match between the named terms and the expected subject headings, and proposes an explanation for match failures in relation to 3 invariant properties common to all search terms: concreteness, complexity, and syndeticity. Suggests that match failure is a consequence of developmental naming patterns and that these patterns can be overcome through the use of metacognitive naming skills
    Date
    2.11.1996 13:08:22
  7. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.04
    0.036348514 = product of:
      0.109045535 = sum of:
        0.109045535 = sum of:
          0.054114055 = weight(_text_:search in 3572) [ClassicSimilarity], result of:
            0.054114055 = score(doc=3572,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.30720934 = fieldWeight in 3572, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0625 = fieldNorm(doc=3572)
          0.054931477 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
            0.054931477 = score(doc=3572,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.30952093 = fieldWeight in 3572, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3572)
      0.33333334 = coord(1/3)
    
    Abstract
     Describes 3 searches on the topic of virtual communities, done on the WWW using HotBot and in traditional databases using LEXIS-NEXIS and ABI/Inform. Concludes that the WWW is a good starting place for a broad concept search, but that the traditional services are better for more precise topics
    Source
    Online. 22(1998) no.3, S.24-26,28
  8. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.03
    0.03180495 = product of:
      0.09541484 = sum of:
        0.09541484 = sum of:
          0.0473498 = weight(_text_:search in 2718) [ClassicSimilarity], result of:
            0.0473498 = score(doc=2718,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.2688082 = fieldWeight in 2718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2718)
          0.04806504 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
            0.04806504 = score(doc=2718,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.2708308 = fieldWeight in 2718, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2718)
      0.33333334 = coord(1/3)
    
    Abstract
     A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual enquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the mean percentage noise figure was approximately 33 per cent, based on a total average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices
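     Noise, in this vocabulary, is the share of retrieved documents that are irrelevant, i.e. the complement of precision. A worked check of the quoted averages (our illustration, not data from the study):

     ```python
     def noise_percentage(retrieved: int, relevant: int) -> float:
         # Noise = irrelevant fraction of the retrieved set,
         # i.e. the complement of precision.
         return 100.0 * (retrieved - relevant) / retrieved

     # An average of 18 documents per search at ~33 per cent noise
     # implies roughly 6 irrelevant documents per search.
     print(noise_percentage(retrieved=18, relevant=12))  # 33.33...
     ```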
    Source
    Journal of documentation. 22(1966), S.203-209
  9. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.03
    0.03180495 = product of:
      0.09541484 = sum of:
        0.09541484 = sum of:
          0.0473498 = weight(_text_:search in 3368) [ClassicSimilarity], result of:
            0.0473498 = score(doc=3368,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.2688082 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
          0.04806504 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
            0.04806504 = score(doc=3368,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.2708308 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
      0.33333334 = coord(1/3)
    
    Abstract
     The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance, and can thus determine which of 2 similarly performing systems is superior. For both a single query term and a multiple query term retrieval model, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems
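     Average search length (ASL) is the expected rank at which a searcher encounters the relevant documents in a ranked output. Losee's models predict this value analytically from database parameters alone; the sketch below (our simplification, not the paper's model) shows only the empirical counterpart that such predictions target:

     ```python
     def average_search_length(ranked_relevance: list[bool]) -> float:
         """Mean 1-based rank of the relevant documents in a ranking."""
         positions = [rank for rank, rel in enumerate(ranked_relevance, start=1) if rel]
         if not positions:
             raise ValueError("no relevant documents in ranking")
         return sum(positions) / len(positions)

     # Relevance, in ranked order, of the documents retrieved for one query:
     print(average_search_length([True, False, True, False, False]))  # 2.0
     ```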
    Date
    22. 2.1996 13:14:10
  10. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.03
    0.03180495 = product of:
      0.09541484 = sum of:
        0.09541484 = sum of:
          0.0473498 = weight(_text_:search in 5001) [ClassicSimilarity], result of:
            0.0473498 = score(doc=5001,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.2688082 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.04806504 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.04806504 = score(doc=5001,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.33333334 = coord(1/3)
    
    Abstract
     A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  11. Tague-Sutcliffe, J.: Information retrieval experimentation (2009) 0.03
    0.029093731 = product of:
      0.08728119 = sum of:
        0.08728119 = weight(_text_:book in 3801) [ClassicSimilarity], result of:
          0.08728119 = score(doc=3801,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.39015728 = fieldWeight in 3801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=3801)
      0.33333334 = coord(1/3)
    
    Footnote
     Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  12. Voorhees, E.M.: Text REtrieval Conference (TREC) (2009) 0.03
    0.029093731 = product of:
      0.08728119 = sum of:
        0.08728119 = weight(_text_:book in 3890) [ClassicSimilarity], result of:
          0.08728119 = score(doc=3890,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.39015728 = fieldWeight in 3890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=3890)
      0.33333334 = coord(1/3)
    
    Footnote
     Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  13. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.03
    0.027261382 = product of:
      0.081784144 = sum of:
        0.081784144 = sum of:
          0.04058554 = weight(_text_:search in 4341) [ClassicSimilarity], result of:
            0.04058554 = score(doc=4341,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.230407 = fieldWeight in 4341, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=4341)
          0.041198608 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
            0.041198608 = score(doc=4341,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.23214069 = fieldWeight in 4341, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4341)
      0.33333334 = coord(1/3)
    
    Abstract
     Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments. The effectiveness of the packages was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and the production and use of packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that the choice of searching strategies, search effectiveness and use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  14. Balog, K.; Schuth, A.; Dekker, P.; Tavakolpoursaleh, N.; Schaer, P.; Chuang, P.-Y.: Overview of the TREC 2016 Open Search track Academic Search Edition (2016) 0.03
    0.02550961 = product of:
      0.07652883 = sum of:
        0.07652883 = product of:
          0.15305766 = sum of:
            0.15305766 = weight(_text_:search in 43) [ClassicSimilarity], result of:
              0.15305766 = score(doc=43,freq=16.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.86891925 = fieldWeight in 43, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=43)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     We present the TREC Open Search track, which represents a new evaluation paradigm for information retrieval. It offers the possibility for researchers to evaluate their approaches in a live setting, with real, unsuspecting users of an existing search engine. The first edition of the track focuses on the academic search domain and features the ad hoc scientific literature search task. We report on experiments with three different academic search engines: CiteSeerX, SSOAR, and Microsoft Academic Search.
  15. Kilgour, F.: ¬An experiment using coordinate title word searches (2004) 0.03
    0.025457015 = product of:
      0.076371044 = sum of:
        0.076371044 = weight(_text_:book in 2065) [ClassicSimilarity], result of:
          0.076371044 = score(doc=2065,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 2065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2065)
      0.33333334 = coord(1/3)
    
    Abstract
     This study, the fourth and last of a series designed to produce new information to improve retrievability of books in libraries, explores the effectiveness of retrieving a known-item book using words from titles only. From daily printouts of circulation records at the Walter Royal Davis Library of the University of North Carolina at Chapel Hill, 749 titles were taken and then searched against the 4-million-entry catalog of the library of the University of Michigan. The principal finding was that searches produced titles having personal authors 81.4% of the time and anonymous titles 91.5% of the time; these figures are 15 and 5%, respectively, lower than the lowest findings presented in the previous three articles of this series (Kilgour, 1995; 1997; 2001).
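     A coordinate title word search conjoins the sought item's title words into an AND query against catalog titles. A minimal sketch of that matching logic (our reconstruction; the study's actual word-selection, stopword, and normalization rules are not specified here):

     ```python
     import re

     def title_words(title: str) -> set[str]:
         # Lowercased alphabetic words; a real system would also drop stopwords.
         return set(re.findall(r"[a-z]+", title.lower()))

     def coordinate_match(query_title: str, catalog_title: str) -> bool:
         # Every word of the sought title must occur in the catalog title.
         return title_words(query_title) <= title_words(catalog_title)

     print(coordinate_match("information retrieval experiment",
                            "The information retrieval experiment: a survey"))  # True
     ```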
  16. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.022717819 = product of:
      0.068153456 = sum of:
        0.068153456 = sum of:
          0.033821285 = weight(_text_:search in 2339) [ClassicSimilarity], result of:
            0.033821285 = score(doc=2339,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.19200584 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
          0.034332175 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
            0.034332175 = score(doc=2339,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.19345059 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
      0.33333334 = coord(1/3)
    
    Abstract
     Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems
    Date
    22. 9.1997 19:16:05
  17. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.02
    0.022717819 = product of:
      0.068153456 = sum of:
        0.068153456 = sum of:
          0.033821285 = weight(_text_:search in 4540) [ClassicSimilarity], result of:
            0.033821285 = score(doc=4540,freq=2.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.19200584 = fieldWeight in 4540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4540)
          0.034332175 = weight(_text_:22 in 4540) [ClassicSimilarity], result of:
            0.034332175 = score(doc=4540,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.19345059 = fieldWeight in 4540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4540)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This study intends to identify factors that affect relevance judgment of retrieved information as part of the 2007 TREC Legal track interactive task. Design/methodology/approach - Data were gathered and analyzed from the participants of the 2007 TREC Legal track interactive task using a questionnaire which includes not only a list of 80 relevance factors identified in prior research, but also a space for expressing their thoughts on relevance judgment in the process. Findings - This study finds that topicality remains a primary criterion, out of various options, for determining relevance, while specificity of the search request, task, or retrieved results also helps greatly in relevance judgment. Research limitations/implications - Relevance research should focus on the topicality and specificity of what is being evaluated as well as conducted in real environments. Practical implications - If multiple relevance factors are presented to assessors, the total number in a list should be below ten to take account of the limited processing capacity of human beings' short-term memory. Otherwise, the assessors might either completely ignore or inadequately consider some of the relevance factors when making judgment decisions. Originality/value - This study presents a method for reducing the artificiality of relevance research design, an apparent limitation in many related studies. Specifically, relevance judgment was made in this research as part of the 2007 TREC Legal track interactive task rather than a study devised for the sake of it. The assessors also served as searchers so that their searching experience would facilitate their subsequent relevance judgments.
    Date
    12. 7.2011 18:29:22
  18. Qiu, L.: Analytical searching vs. browsing in hypertext information retrieval systems (1993) 0.02
    0.02232091 = product of:
      0.06696273 = sum of:
        0.06696273 = product of:
          0.13392545 = sum of:
            0.13392545 = weight(_text_:search in 7416) [ClassicSimilarity], result of:
              0.13392545 = score(doc=7416,freq=16.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.76030433 = fieldWeight in 7416, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7416)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     Reports an experiment conducted to study the search behaviour of different user groups in a hypertext information retrieval system. A three-way analysis of variance test was conducted to study the effects of gender, search task, and search experience on search option (analytical searching versus browsing), as measured by the proportion of nodes reached through analytical searching. The search task factor influenced search option in that a general task caused more browsing and a specific task more analytical searching. Gender or search experience alone did not affect the search option. These findings are discussed in light of the evaluation of existing systems and implications for future design
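     As a rough illustration of the study's design (not the authors' code; the column names and values below are invented), a three-way ANOVA on per-session proportions could be run like so:

     ```python
     import itertools

     import pandas as pd
     import statsmodels.formula.api as smf
     from statsmodels.stats.anova import anova_lm

     # Hypothetical data: 2 sessions per cell of the 2x2x2 design
     # (gender x search task x search experience).
     cells = list(itertools.product(["f", "m"],
                                    ["general", "specific"],
                                    ["novice", "experienced"])) * 2
     df = pd.DataFrame(cells, columns=["gender", "task", "experience"])
     # Proportion of nodes reached through analytical searching, per session;
     # values invented so that only the task factor has a clear effect.
     df["analytic_prop"] = [0.20, 0.25, 0.65, 0.70, 0.30, 0.22, 0.68, 0.60,
                            0.24, 0.28, 0.72, 0.66, 0.26, 0.33, 0.63, 0.71]

     # Three-way ANOVA: main effects and all interactions.
     model = smf.ols("analytic_prop ~ C(gender) * C(task) * C(experience)",
                     data=df).fit()
     print(anova_lm(model, typ=2))
     ```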
  19. Kristensen, J.: Expanding end-users' query statements for free text searching with a search-aid thesaurus (1993) 0.02
    0.02209197 = product of:
      0.06627591 = sum of:
        0.06627591 = product of:
          0.13255182 = sum of:
            0.13255182 = weight(_text_:search in 6621) [ClassicSimilarity], result of:
              0.13255182 = score(doc=6621,freq=12.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.75250614 = fieldWeight in 6621, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6621)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Tests the effectiveness of a thesaurus as a search-aid in free text searching of a full text database. A set of queries was searched against a large full text database of newspaper articles. The thesaurus contained equivalence, hierarchical and associative relationships. Each query was searched in five modes: basic search, synonym search, narrower term search, related term search, and union of all previous searches. The searches were analyzed in terms of relative recall and precision
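     Relative recall here uses the union of all search modes as the recall base, since true recall is unknowable in a large full text database. A minimal sketch of the two measures (our own formulation, with hypothetical document IDs):

     ```python
     def precision(retrieved: set[str], relevant: set[str]) -> float:
         return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

     def relative_recall(retrieved: set[str], pooled_relevant: set[str]) -> float:
         # Recall against all relevant documents found by any search mode.
         return len(retrieved & pooled_relevant) / len(pooled_relevant)

     relevant = {"d1", "d2", "d5", "d7"}      # judged relevant among all retrieved
     basic    = {"d1", "d3"}                  # basic search result set
     synonym  = {"d1", "d2", "d4", "d5"}      # synonym search result set
     pooled   = (basic | synonym) & relevant  # recall base from the union

     print(precision(synonym, relevant))      # 0.75
     print(relative_recall(synonym, pooled))  # 1.0
     ```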
  20. Palmquist, R.A.; Kim, K.-S.: Cognitive style and on-line database search experience as predictors of Web search performance (2000) 0.02
    0.021390459 = product of:
      0.064171374 = sum of:
        0.064171374 = product of:
          0.12834275 = sum of:
            0.12834275 = weight(_text_:search in 4605) [ClassicSimilarity], result of:
              0.12834275 = score(doc=4605,freq=20.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.72861093 = fieldWeight in 4605, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4605)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     This study sought to investigate the effects of cognitive style (field dependent and field independent) and on-line database search experience (novice and experienced) on the WWW search performance of undergraduate college students (n=48). It also attempted to find user factors that could be used to predict search efficiency. Search performance, the dependent variable, was defined in 2 ways: (1) the time required for retrieving a relevant information item, and (2) the number of nodes traversed for retrieving a relevant information item. The search tasks were carried out on a university Web site, and included a factual task and a topical search task of interest to the participant. Results indicated that while cognitive style (FD/FI) significantly influenced the search performance of novice searchers, the influence was greatly reduced in those searchers who had on-line database search experience. Based on the findings, suggestions for possible changes to the design of the current Web interface and to user training programs are provided