Search (98 results, page 1 of 5)

  • type_ss:"a"
  • theme_ss:"Retrievalstudien"
  1. MacCall, S.L.; Cleveland, A.D.; Gibson, I.E.: Outline and preliminary evaluation of the classical digital library model (1999) 0.06
    0.058712237 = product of:
      0.11742447 = sum of:
        0.07461992 = weight(_text_:digital in 6541) [ClassicSimilarity], result of:
          0.07461992 = score(doc=6541,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 6541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
        0.042804558 = weight(_text_:library in 6541) [ClassicSimilarity], result of:
          0.042804558 = score(doc=6541,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.32479787 = fieldWeight in 6541, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
      0.5 = coord(2/4)
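    The nested figures above are Lucene's ClassicSimilarity (TF-IDF) explanation for result 1: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), and the per-term contributions are summed and multiplied by the coordination factor. As an illustrative sketch only, recomputed from the values displayed here rather than taken from the search output (variable names are ours):

      # Sketch: recompute the displayed ClassicSimilarity score for result 1 (doc 6541).
      import math

      QUERY_NORM = 0.050121464                     # queryNorm shared by all terms

      def term_weight(freq, idf, field_norm):
          """Per-term weight as shown in the explanation tree."""
          query_weight = idf * QUERY_NORM          # e.g. 0.19770671 for _text_:digital
          tf = math.sqrt(freq)                     # tf(freq) = sqrt(termFreq)
          field_weight = tf * idf * field_norm     # e.g. 0.37742734
          return query_weight * field_weight       # e.g. 0.07461992

      w_digital = term_weight(freq=6.0, idf=3.944552, field_norm=0.0390625)
      w_library = term_weight(freq=10.0, idf=2.6293786, field_norm=0.0390625)

      coord = 2 / 4                                # coord(2/4): 2 of the 4 query clauses matched
      print(coord * (w_digital + w_library))       # ~0.058712237, the score shown for result 1

    The same pattern holds for every score breakdown on this page; entries matching only one of the four query clauses carry coord(1/4) instead of coord(2/4).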
    
    Abstract
    The growing number of networked information resources and services offers unprecedented opportunities for delivering high quality information to the computer desktop of a wide range of individuals. However, currently there is a reliance on a database retrieval model, in which end users use keywords to search large collections of automatically indexed resources in order to find needed information. As an alternative to the database retrieval model, this paper outlines the classical digital library model, which is derived from traditional practices of library and information science professionals. These practices include the selection and organization of information resources for local populations of users and the integration of advanced information retrieval tools, such as databases and the Internet, into these collections. To evaluate this model, library and information professionals and end users involved with primary care medicine were asked to respond to a series of questions comparing their experiences with a digital library developed for the primary care population to their experiences with general Internet use. Preliminary results are reported.
  2. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: ¬The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.06
    0.056257617 = product of:
      0.11251523 = sum of:
        0.0895439 = weight(_text_:digital in 2021) [ClassicSimilarity], result of:
          0.0895439 = score(doc=2021,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4529128 = fieldWeight in 2021, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
        0.022971334 = weight(_text_:library in 2021) [ClassicSimilarity], result of:
          0.022971334 = score(doc=2021,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.5 = coord(2/4)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  3. Park, S.: Usability, user preferences, effectiveness, and user behaviors when searching individual and integrated full-text databases : implications for digital libraries (2000) 0.05
    0.053888097 = product of:
      0.107776195 = sum of:
        0.07461992 = weight(_text_:digital in 4591) [ClassicSimilarity], result of:
          0.07461992 = score(doc=4591,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 4591, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
        0.033156272 = weight(_text_:library in 4591) [ClassicSimilarity], result of:
          0.033156272 = score(doc=4591,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25158736 = fieldWeight in 4591, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
      0.5 = coord(2/4)
    
    Abstract
    This article addresses a crucial issue in the digital library environment: how to support effective interaction of users with heterogeneous and distributed information resources. In particular, this study compared usability, user preference, effectiveness, and searching behaviors in systems that implement interaction with multiple databases as if they were one (integrated interaction) in an experiment in the TREC environment. 28 volunteers were recruited from the graduate students of the School of Communication, Information & Library Studies at Rutgers University. Significantly more subjects preferred the common interface to the integrated interface, mainly because they could have more control over database selection. Subjects were also more satisfied with the results from the common interface, and performed better with the common interface than with the integrated interface. Overall, it appears that for this population, interacting with databases through a common interface is preferable on all grounds to interacting with databases through an integrated interface. These results suggest that: (1) the general assumption of the information retrieval (IR) literature that an integrated interaction is best needs to be revisited; (2) it is important to allow for more user control in the distributed environment; (3) for digital library purposes, it is important to characterize different databases to support user choice for integration; and (4) certain users prefer control over database selection while still opting for results to be merged.
  4. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.05
    0.050567575 = product of:
      0.10113515 = sum of:
        0.053599782 = weight(_text_:library in 5089) [ClassicSimilarity], result of:
          0.053599782 = score(doc=5089,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.40671125 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=5089)
        0.047535364 = product of:
          0.09507073 = sum of:
            0.09507073 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.09507073 = score(doc=5089,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 7.2006 18:43:54
  5. Behnert, C.; Lewandowski, D.: ¬A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.05
    0.04686443 = product of:
      0.09372886 = sum of:
        0.043081827 = weight(_text_:digital in 3700) [ClassicSimilarity], result of:
          0.043081827 = score(doc=3700,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 3700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
        0.050647035 = weight(_text_:library in 3700) [ClassicSimilarity], result of:
          0.050647035 = score(doc=3700,freq=14.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.384306 = fieldWeight in 3700, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources.
    Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials.
    Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context.
    Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context.
    Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
  6. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.04
    0.042796366 = product of:
      0.08559273 = sum of:
        0.060926907 = weight(_text_:digital in 6684) [ClassicSimilarity], result of:
          0.060926907 = score(doc=6684,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 6684, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6684)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 6684) [ClassicSimilarity], result of:
              0.049331643 = score(doc=6684,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 6684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6684)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents
  7. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.03
    0.027631238 = product of:
      0.055262476 = sum of:
        0.03828556 = weight(_text_:library in 2339) [ClassicSimilarity], result of:
          0.03828556 = score(doc=2339,freq=8.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.29050803 = fieldWeight in 2339, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.033953834 = score(doc=2339,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : University of Illinois at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  8. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.03
    0.025283787 = product of:
      0.050567575 = sum of:
        0.026799891 = weight(_text_:library in 2718) [ClassicSimilarity], result of:
          0.026799891 = score(doc=2718,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 2718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2718)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
              0.047535364 = score(doc=2718,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 2718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2718)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual inquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the mean average percentage noise figure was approximately 33 per cent, based on a total average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
    Source
    Journal of documentation. 22(1966), S.203-209
  9. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.03
    0.025283787 = product of:
      0.050567575 = sum of:
        0.026799891 = weight(_text_:library in 5598) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5598,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5598)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
              0.047535364 = score(doc=5598,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    2.11.1996 13:08:22
    Source
    Library and information science research. 17(1995) no.4, S.347-385
  10. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.02
    0.022024449 = product of:
      0.044048898 = sum of:
        0.027071979 = weight(_text_:library in 1184) [ClassicSimilarity], result of:
          0.027071979 = score(doc=1184,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 1184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.033953834 = score(doc=1184,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : University of Illinois at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler
  11. Salampasis, M.; Tait, J.; Bloor, C.: Evaluation of information-seeking performance in hypermedia digital libraries (1998) 0.02
    0.021324418 = product of:
      0.085297674 = sum of:
        0.085297674 = weight(_text_:digital in 3759) [ClassicSimilarity], result of:
          0.085297674 = score(doc=3759,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 3759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3759)
      0.25 = coord(1/4)
    
    Abstract
    Discusses current methods for evaluating information retrieval that are based on recall (R) and precision (P) and examines their suitability for evaluating the performance of hypermedia digital libraries. Proposes a new quantitative evaluation methodology, based on the structural analysis of hypermedia networks and the navigational and search state patterns of information seekers. Although the proposed methodology retains some of the characteristics of R and P evaluation, it could be better suited to measuring the performance of information-seeking environments where information seekers can utilize arbitrary mixtures of browsing and query-based searching strategies.
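    Recall (R) and precision (P), the baseline measures the abstract refers to, are the standard set-based definitions; a minimal illustrative sketch with hypothetical document sets (not data from the study):

      # Sketch: standard set-based precision and recall for a single query.
      def precision_recall(retrieved, relevant):
          hits = len(retrieved & relevant)                       # relevant documents actually retrieved
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      # Hypothetical run: 8 documents retrieved, 5 of them relevant,
      # out of 10 relevant documents in the collection.
      retrieved = set(range(1, 9))
      relevant = {1, 2, 3, 4, 5, 20, 21, 22, 23, 24}
      print(precision_recall(retrieved, relevant))               # (0.625, 0.5)

    The methodology proposed in the paper goes beyond these set-based measures by also analysing the hypermedia structure and the seekers' navigation and search-state patterns.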
  12. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.02
    0.02082137 = product of:
      0.08328548 = sum of:
        0.08328548 = sum of:
          0.049331643 = weight(_text_:project in 2026) [ClassicSimilarity], result of:
            0.049331643 = score(doc=2026,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.23317845 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.033953834 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.033953834 = score(doc=2026,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the result of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims at stimulating researchers into considering user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  13. Harter, S.P.: ¬The Cranfield II relevance assessments : a critical evaluation (1971) 0.02
    0.015314223 = product of:
      0.061256893 = sum of:
        0.061256893 = weight(_text_:library in 5364) [ClassicSimilarity], result of:
          0.061256893 = score(doc=5364,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.46481284 = fieldWeight in 5364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.125 = fieldNorm(doc=5364)
      0.25 = coord(1/4)
    
    Source
    Library quarterly. 41(1971), S.223-228
  14. Prabha, C.: ¬The large retrieval phenomenon (1991) 0.02
    0.015314223 = product of:
      0.061256893 = sum of:
        0.061256893 = weight(_text_:library in 7683) [ClassicSimilarity], result of:
          0.061256893 = score(doc=7683,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.46481284 = fieldWeight in 7683, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.125 = fieldNorm(doc=7683)
      0.25 = coord(1/4)
    
    Source
    Advances in library automation and networking. 4(1991), S.55-92
  15. Wilkes, A.; Nelson, A.: Subject searching in two online catalogs : authority control vs. non authority control (1995) 0.01
    0.014981594 = product of:
      0.059926376 = sum of:
        0.059926376 = weight(_text_:library in 4450) [ClassicSimilarity], result of:
          0.059926376 = score(doc=4450,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.45471698 = fieldWeight in 4450, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4450)
      0.25 = coord(1/4)
    
    Abstract
    Compares the results of subject searching in 2 online catalogue systems, one system with authority control, the other without. Transaction logs from Library A (no authority control) were analyzed to identify searching patterns of users; 885 searches were attempted, 351 (39,7%) by subject. 142 (40,6%) of these subject searches were unsuccessful. Identical searches were performed in a comparable library that has authority control, Library B. Terms identified in 'see' references at Library B were searched in Library A. 105 (73,9%) of the searches that appeared to fail would have retrieved at least one, and usually many, records if a link had been provided between the term chosen by the user and the term used by the system.
  16. Buckley, C.: ¬The SMART Project at TREC (2005) 0.01
    0.014799493 = product of:
      0.059197973 = sum of:
        0.059197973 = product of:
          0.11839595 = sum of:
            0.11839595 = weight(_text_:project in 5088) [ClassicSimilarity], result of:
              0.11839595 = score(doc=5088,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.5596283 = fieldWeight in 5088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5088)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  17. Johnson, K.E.: OPAC missing record retrieval (1996) 0.01
    0.014067013 = product of:
      0.05626805 = sum of:
        0.05626805 = weight(_text_:library in 6735) [ClassicSimilarity], result of:
          0.05626805 = score(doc=6735,freq=12.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.42695788 = fieldWeight in 6735, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=6735)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a study, conducted at the University of Rhode Island Library, to determine whether cataloguing records known to be missing from a library consortium OPAC database could be identified using the database search features. Attempts to create lists of bibliographic records held by other libraries in the consortium using Boolean searching features failed due to search feature limitations. Samples of search logic were created, collections of records based on this logic were assembled manually and then compared with the card catalogue of the single library. Results suggest that use of the Boolean OR operator to conduct the broadest possible search could find 56.000 of the library's missing records that were held by other libraries. Use of the Boolean AND operator to conduct the narrowest search found 85.000 missing records. A specific library search made of the records of the most likely consortium library to have overlaid the single library's holdings found that 80.000 of the single library's missing records were held by a specific library.
  18. Bates, M.J.: Document familiarity, relevance, and Bradford's law : the Getty Online Searching Project report; no.5 (1996) 0.01
    0.013953096 = product of:
      0.055812385 = sum of:
        0.055812385 = product of:
          0.11162477 = sum of:
            0.11162477 = weight(_text_:project in 6978) [ClassicSimilarity], result of:
              0.11162477 = score(doc=6978,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.52762264 = fieldWeight in 6978, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6978)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The Getty Online Searching Project studied the end user searching behaviour of 27 humanities scholars over a 2 year period. A number of scholars anticipated that they were already familiar with a percentage of records their searches retrieved. High document familiarity can be a significant factor in searching. Draws implications regarding the impact of high document familiarity on relevance and information retrieval theory. Makes speculations regarding high document familiarity and Bradford's law.
  19. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1999) 0.01
    0.013399946 = product of:
      0.053599782 = sum of:
        0.053599782 = weight(_text_:library in 4539) [ClassicSimilarity], result of:
          0.053599782 = score(doc=4539,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.40671125 = fieldWeight in 4539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=4539)
      0.25 = coord(1/4)
    
    Source
    Encyclopedia of library and information science. Vol.64, [=Suppl.27]
  20. Voiskunskii, V.G.: Evaluation of search results (2000) 0.01
    0.013399946 = product of:
      0.053599782 = sum of:
        0.053599782 = weight(_text_:library in 4670) [ClassicSimilarity], result of:
          0.053599782 = score(doc=4670,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.40671125 = fieldWeight in 4670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=4670)
      0.25 = coord(1/4)
    
    Source
    Encyclopedia of library and information science. Vol.66, [=Suppl.29]
