Search (111 results, page 1 of 6)

  • theme_ss:"Retrievalstudien"
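
  The filter above is a Solr-style field query ("theme_ss" is a multi-valued string field), and the per-result score breakdowns below are Lucene ClassicSimilarity explanations of the kind Solr returns in debug mode. A minimal sketch of how such a listing might be requested, assuming a Solr backend; the host, core name, query terms, and all field names except theme_ss are illustrative only:

    import requests

    # Hypothetical endpoint; host and core name are illustrative, not the real service.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "text:(retrieval evaluation)",   # illustrative query terms
        "fq": 'theme_ss:"Retrievalstudien"',  # the active filter shown above
        "fl": "id,score",                     # return the relevance score with each hit
        "rows": 20,                           # 20 hits per page, as in this listing
        "start": 0,                           # offset 0 = page 1 of 6
        "debugQuery": "true",                 # adds per-document score explanations
        "wt": "json",
    }

    resp = requests.get(SOLR_SELECT, params=params, timeout=10).json()
    for doc in resp["response"]["docs"]:
        print(doc["id"], round(doc["score"], 2))

    # The nested explanations shown under each result correspond to
    # resp["debug"]["explain"], keyed by document id.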
  1. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.06
    0.061456952 = product of:
      0.122913904 = sum of:
        0.022971334 = weight(_text_:library in 3564) [ClassicSimilarity], result of:
          0.022971334 = score(doc=3564,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 3564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3564)
        0.09994257 = sum of:
          0.059197973 = weight(_text_:project in 3564) [ClassicSimilarity], result of:
            0.059197973 = score(doc=3564,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.27981415 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
          0.0407446 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
            0.0407446 = score(doc=3564,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.23214069 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
      0.5 = coord(2/4)
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML, the Comprehensive Core Medical Library, from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed two reasons for these failures: the searcher often constructed a restrictive strategy, which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
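
    The score breakdown above is Lucene ClassicSimilarity (TF-IDF) output: each weight(...) node is the product of a query weight (idf x queryNorm) and a field weight (tf x idf x fieldNorm), and the document total is coord(matching clauses / total clauses) times the sum of the clause scores. A minimal sketch that reproduces the arithmetic for the "library" clause of document 3564, assuming the classic definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); queryNorm is copied from the output because it depends on the query as a whole:

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          """Reproduce one weight(...) node of a ClassicSimilarity explanation."""
          tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 2.6293786 for docFreq=8668
          query_weight = idf * query_norm                  # 0.1317883
          field_weight = tf * idf * field_norm             # 0.17430481
          return query_weight * field_weight               # 0.022971334

      # Values copied from the explanation of result 1 (doc 3564), clause _text_:library.
      library_clause = classic_term_score(freq=2.0, doc_freq=8668, max_docs=44218,
                                          query_norm=0.050121464, field_norm=0.046875)

      # Document total: coord(2/4) times the sum of the matching clause scores.
      total = 0.5 * (library_clause + 0.09994257)
      print(round(library_clause, 9), round(total, 9))   # ~0.022971334, ~0.061456952

    The same recipe applies to every weight(...) node in the results below; only freq, docFreq, fieldNorm, and the term change.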
  2. MacCall, S.L.; Cleveland, A.D.; Gibson, I.E.: Outline and preliminary evaluation of the classical digital library model (1999) 0.06
    0.058712237 = product of:
      0.11742447 = sum of:
        0.07461992 = weight(_text_:digital in 6541) [ClassicSimilarity], result of:
          0.07461992 = score(doc=6541,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 6541, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
        0.042804558 = weight(_text_:library in 6541) [ClassicSimilarity], result of:
          0.042804558 = score(doc=6541,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.32479787 = fieldWeight in 6541, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
      0.5 = coord(2/4)
    
    Abstract
    The growing number of networked information resources and services offers unprecedented opportunities for delivering high-quality information to the computer desktop of a wide range of individuals. However, there is currently a reliance on a database retrieval model, in which end users use keywords to search large collections of automatically indexed resources in order to find needed information. As an alternative to the database retrieval model, this paper outlines the classical digital library model, which is derived from the traditional practices of library and information science professionals. These practices include the selection and organization of information resources for local populations of users and the integration of advanced information retrieval tools, such as databases and the Internet, into these collections. To evaluate this model, library and information professionals and end users involved with primary care medicine were asked to respond to a series of questions comparing their experiences with a digital library developed for the primary care population to their experiences with general Internet use. Preliminary results are reported.
  3. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: ¬The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.06
    0.056257617 = product of:
      0.11251523 = sum of:
        0.0895439 = weight(_text_:digital in 2021) [ClassicSimilarity], result of:
          0.0895439 = score(doc=2021,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4529128 = fieldWeight in 2021, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
        0.022971334 = weight(_text_:library in 2021) [ClassicSimilarity], result of:
          0.022971334 = score(doc=2021,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.5 = coord(2/4)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  4. Park, S.: Usability, user preferences, effectiveness, and user behaviors when searching individual and integrated full-text databases : implications for digital libraries (2000) 0.05
    0.053888097 = product of:
      0.107776195 = sum of:
        0.07461992 = weight(_text_:digital in 4591) [ClassicSimilarity], result of:
          0.07461992 = score(doc=4591,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 4591, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
        0.033156272 = weight(_text_:library in 4591) [ClassicSimilarity], result of:
          0.033156272 = score(doc=4591,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25158736 = fieldWeight in 4591, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
      0.5 = coord(2/4)
    
    Abstract
    This article addresses a crucial issue in the digital library environment: how to support effective interaction of users with heterogeneous and distributed information resources. In particular, this study compared usability, user preferences, effectiveness, and searching behaviors in systems that implement interaction with multiple databases as if they were one (integrated interaction) and in systems that support interaction with each database separately through a common interface, in an experiment in the TREC environment. 28 volunteers were recruited from the graduate students of the School of Communication, Information & Library Studies at Rutgers University. Significantly more subjects preferred the common interface to the integrated interface, mainly because they could have more control over database selection. Subjects were also more satisfied with the results from the common interface, and performed better with the common interface than with the integrated interface. Overall, it appears that for this population, interacting with databases through a common interface is preferable on all grounds to interacting with databases through an integrated interface. These results suggest that: (1) the general assumption of the information retrieval (IR) literature that an integrated interaction is best needs to be revisited; (2) it is important to allow for more user control in the distributed environment; (3) for digital library purposes, it is important to characterize different databases to support user choice for integration; and (4) certain users prefer control over database selection while still opting for results to be merged.
  5. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.05
    0.050567575 = product of:
      0.10113515 = sum of:
        0.053599782 = weight(_text_:library in 5089) [ClassicSimilarity], result of:
          0.053599782 = score(doc=5089,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.40671125 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=5089)
        0.047535364 = product of:
          0.09507073 = sum of:
            0.09507073 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.09507073 = score(doc=5089,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 7.2006 18:43:54
  6. Behnert, C.; Lewandowski, D.: ¬A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.05
    0.04686443 = product of:
      0.09372886 = sum of:
        0.043081827 = weight(_text_:digital in 3700) [ClassicSimilarity], result of:
          0.043081827 = score(doc=3700,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 3700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
        0.050647035 = weight(_text_:library in 3700) [ClassicSimilarity], result of:
          0.050647035 = score(doc=3700,freq=14.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.384306 = fieldWeight in 3700, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials. Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
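
    The five parts of the framework suggest a straightforward shape for a retrieval test harness. A rough, hypothetical sketch of how queries, results, and assessor judgements might be organised for the testing and data-analysis steps; the class and field names are illustrative and are not taken from the paper:

      from dataclasses import dataclass, field

      @dataclass
      class SearchResult:
          doc_id: str
          doc_type: str            # e.g. "electronic article" vs. "printed monograph"

      @dataclass
      class Assessment:
          query_id: str
          doc_id: str
          assessor: str
          relevance: int           # graded judgement, e.g. 0 (not relevant) to 3

      @dataclass
      class RetrievalTest:
          queries: dict[str, str] = field(default_factory=dict)                # query_id -> query text
          results: dict[str, list[SearchResult]] = field(default_factory=dict)
          assessments: list[Assessment] = field(default_factory=list)

          def precision_at(self, query_id: str, k: int, threshold: int = 1) -> float:
              """Fraction of the top-k results judged relevant (>= threshold)."""
              relevant = {(a.query_id, a.doc_id)
                          for a in self.assessments if a.relevance >= threshold}
              top_k = self.results.get(query_id, [])[:k]
              if not top_k:
                  return 0.0
              return sum((query_id, r.doc_id) in relevant for r in top_k) / len(top_k)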
  7. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.04
    0.042796366 = product of:
      0.08559273 = sum of:
        0.060926907 = weight(_text_:digital in 6684) [ClassicSimilarity], result of:
          0.060926907 = score(doc=6684,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 6684, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6684)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 6684) [ClassicSimilarity], result of:
              0.049331643 = score(doc=6684,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 6684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6684)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time, we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other, are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents.
  8. Armstrong, C.J.; Medawar, K.: Investigation into the quality of databases in general use in the UK (1996) 0.04
    0.03621646 = product of:
      0.07243292 = sum of:
        0.03790077 = weight(_text_:library in 6768) [ClassicSimilarity], result of:
          0.03790077 = score(doc=6768,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 6768, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6768)
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 6768) [ClassicSimilarity], result of:
              0.069064304 = score(doc=6768,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 6768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6768)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Reports on a Centre for Information Quality Management (CIQM) BLRRD-funded project which investigated the quality of databases in general use in the UK. Gives a literature review of quality in library and information services. Reports the results of a CIQM questionnaire survey on the quality problems of databases and their effect on users. Carries out database evaluations of INSPEC on ESA-IRS, INSPEC on KR Data-Star, INSPEC on UMI CD-ROM, BNB on CD-ROM, and Information Science Abstracts Plus CD-ROM. Sets out a methodology for the evaluation of bibliographic databases.
    Imprint
    London : British Library
  9. Ellis, D.: Progress and problems in information retrieval (1996) 0.03
    0.028895756 = product of:
      0.057791512 = sum of:
        0.030628446 = weight(_text_:library in 789) [ClassicSimilarity], result of:
          0.030628446 = score(doc=789,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.23240642 = fieldWeight in 789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=789)
        0.027163066 = product of:
          0.054326132 = sum of:
            0.054326132 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
              0.054326132 = score(doc=789,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.30952093 = fieldWeight in 789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=789)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    26. 7.2002 20:22:46
    Imprint
    London : Library association publishing
  10. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.03
    0.027631238 = product of:
      0.055262476 = sum of:
        0.03828556 = weight(_text_:library in 2339) [ClassicSimilarity], result of:
          0.03828556 = score(doc=2339,freq=8.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.29050803 = fieldWeight in 2339, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.033953834 = score(doc=2339,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  11. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.03
    0.025283787 = product of:
      0.050567575 = sum of:
        0.026799891 = weight(_text_:library in 2718) [ClassicSimilarity], result of:
          0.026799891 = score(doc=2718,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 2718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2718)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
              0.047535364 = score(doc=2718,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 2718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2718)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual enquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the mean percentage noise figure was approximately 33 per cent, based on an average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
    Source
    Journal of documentation. 22(1966), S.203-209
  12. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.03
    0.025283787 = product of:
      0.050567575 = sum of:
        0.026799891 = weight(_text_:library in 5598) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5598,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5598)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
              0.047535364 = score(doc=5598,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    2.11.1996 13:08:22
    Source
    Library and information science research. 17(1995) no.4, S.347-385
  13. Hancock-Beaulieu, M.; McKenzie, L.; Irving, A.: Evaluative protocols for searching behaviour in online library catalogues (1991) 0.02
    0.02320939 = product of:
      0.09283756 = sum of:
        0.09283756 = weight(_text_:library in 347) [ClassicSimilarity], result of:
          0.09283756 = score(doc=347,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.7044446 = fieldWeight in 347, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.109375 = fieldNorm(doc=347)
      0.25 = coord(1/4)
    
    Imprint
    London : British Library
    Series
    British Library research and development report; no.6031
  14. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.02
    0.022024449 = product of:
      0.044048898 = sum of:
        0.027071979 = weight(_text_:library in 1184) [ClassicSimilarity], result of:
          0.027071979 = score(doc=1184,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 1184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.033953834 = score(doc=1184,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler
  15. Salampasis, M.; Tait, J.; Bloor, C.: Evaluation of information-seeking performance in hypermedia digital libraries (1998) 0.02
    0.021324418 = product of:
      0.085297674 = sum of:
        0.085297674 = weight(_text_:digital in 3759) [ClassicSimilarity], result of:
          0.085297674 = score(doc=3759,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 3759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3759)
      0.25 = coord(1/4)
    
    Abstract
    Discusses current methods for evaluating information retrieval based on recall (R) and precision (P) and examines their suitability for evaluating the performance of hypermedia digital libraries. Proposes a new quantitative evaluation methodology based on the structural analysis of hypermedia networks and the navigational and search-state patterns of information seekers. Although the proposed methodology retains some of the characteristics of R and P evaluation, it could be more suitable than these measures for assessing the performance of information-seeking environments where information seekers can utilize arbitrary mixtures of browsing and query-based searching strategies.
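
    The recall (R) and precision (P) measures that this critique starts from are simple set ratios over retrieved and relevant documents. A minimal sketch of the standard set-based definitions for a single search; the document identifiers are illustrative only:

      def precision_recall(retrieved, relevant):
          """Set-based precision and recall for a single query."""
          retrieved, relevant = set(retrieved), set(relevant)
          hits = retrieved & relevant
          precision = len(hits) / len(retrieved) if retrieved else 0.0
          recall = len(hits) / len(relevant) if relevant else 0.0
          return precision, recall

      # Illustrative values: 4 documents retrieved, 3 relevant in the collection, 2 in common.
      p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                              relevant=["d2", "d4", "d7"])
      print(p, r)   # 0.5, 0.666...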
  16. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.02
    0.02082137 = product of:
      0.08328548 = sum of:
        0.08328548 = sum of:
          0.049331643 = weight(_text_:project in 2026) [ClassicSimilarity], result of:
            0.049331643 = score(doc=2026,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.23317845 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.033953834 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.033953834 = score(doc=2026,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it also enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  17. TREC: experiment and evaluation in information retrieval (2005) 0.02
    0.016936913 = product of:
      0.033873826 = sum of:
        0.021540914 = weight(_text_:digital in 636) [ClassicSimilarity], result of:
          0.021540914 = score(doc=636,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.10895388 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.012332911 = product of:
          0.024665821 = sum of:
            0.024665821 = weight(_text_:project in 636) [ClassicSimilarity], result of:
              0.024665821 = score(doc=636,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.116589226 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Contains the following contributions:
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
    Series
    Digital libraries and electronic publishing
  18. Harter, S.P.: ¬The Cranfield II relevance assessments : a critical evaluation (1971) 0.02
    0.015314223 = product of:
      0.061256893 = sum of:
        0.061256893 = weight(_text_:library in 5364) [ClassicSimilarity], result of:
          0.061256893 = score(doc=5364,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.46481284 = fieldWeight in 5364, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.125 = fieldNorm(doc=5364)
      0.25 = coord(1/4)
    
    Source
    Library quarterly. 41(1971), S.223-228
  19. Prabha, C.: ¬The large retrieval phenomenon (1991) 0.02
    0.015314223 = product of:
      0.061256893 = sum of:
        0.061256893 = weight(_text_:library in 7683) [ClassicSimilarity], result of:
          0.061256893 = score(doc=7683,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.46481284 = fieldWeight in 7683, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.125 = fieldNorm(doc=7683)
      0.25 = coord(1/4)
    
    Source
    Advances in library automation and networking. 4(1991), S.55-92
  20. Wilkes, A.; Nelson, A.: Subject searching in two online catalogs : authority control vs. non authority control (1995) 0.01
    0.014981594 = product of:
      0.059926376 = sum of:
        0.059926376 = weight(_text_:library in 4450) [ClassicSimilarity], result of:
          0.059926376 = score(doc=4450,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.45471698 = fieldWeight in 4450, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4450)
      0.25 = coord(1/4)
    
    Abstract
    Compares the results of subject searching in two online catalogue systems, one with authority control, the other without. Transaction logs from Library A (no authority control) were analyzed to identify the searching patterns of users; 885 searches were attempted, 351 (39.7%) by subject. 142 (40.6%) of these subject searches were unsuccessful. Identical searches were performed in a comparable library that has authority control, Library B. Terms identified in 'see' references at Library B were searched in Library A. 105 (73.9%) of the searches that appeared to fail would have retrieved at least one, and usually many, records if a link had been provided between the term chosen by the user and the term used by the system.

Types

  • a 98
  • s 5
  • el 4
  • m 4
  • r 3
  • d 1
  • p 1