Search (47 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Gilchrist, A.: Research and consultancy (1998) 0.10
    0.09922923 = product of:
      0.14884384 = sum of:
        0.09589544 = weight(_text_:electronic in 1394) [ClassicSimilarity], result of:
          0.09589544 = score(doc=1394,freq=4.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.48868814 = fieldWeight in 1394, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0625 = fieldNorm(doc=1394)
        0.052948397 = product of:
          0.10589679 = sum of:
            0.10589679 = weight(_text_:publishing in 1394) [ClassicSimilarity], result of:
              0.10589679 = score(doc=1394,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.4318339 = fieldWeight in 1394, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1394)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
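    The nested breakdown above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model named in each weight line. As a minimal sketch (variable names are illustrative, not part of the record), the top term weight and the document total can be re-derived from the listed factors, assuming Lucene's standard definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

      import math

      # Factors copied from the explain tree for hit 1 (doc 1394, term "electronic").
      freq, doc_freq, max_docs = 4.0, 2409, 44218
      query_norm, field_norm = 0.05019314, 0.0625

      tf = math.sqrt(freq)                             # 2.0
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.9095051
      query_weight = idf * query_norm                  # 0.19623034
      field_weight = tf * idf * field_norm             # 0.48868814
      term_score = query_weight * field_weight         # 0.09589544

      # The "publishing" clause (0.10589679) is halved by coord(1/2); the clause sum
      # is then scaled by coord(2/3) because 2 of the 3 query terms matched.
      total = (term_score + 0.10589679 * 0.5) * (2.0 / 3.0)   # 0.09922923
      print(round(term_score, 8), round(total, 8))

    Every other breakdown in this list follows the same pattern: per-term queryWeight * fieldWeight, clause sums, and coord factors.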
    
    Abstract
    State-of-the-art review of literature published about research and consultancy in library and information science (LIS). Issues covered include: scope and definitions of what constitutes research and consultancy; funding of research and development; national LIS research and the funding agencies; electronic libraries; document delivery; multimedia document delivery; the Z39.50 standard for client-server computer architecture; the Internet and WWW; electronic publishing; information retrieval; evaluation and evaluation techniques; the Text Retrieval Conferences (TREC); the user domain; management issues; decision support systems; information politics and organizational culture; and value-for-money issues
  2. Lespinasse, K.: TREC: une conférence pour l'évaluation des systèmes de recherche d'information (1997) 0.06
    0.06334015 = product of:
      0.09501022 = sum of:
        0.06780831 = weight(_text_:electronic in 744) [ClassicSimilarity], result of:
          0.06780831 = score(doc=744,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.34555468 = fieldWeight in 744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0625 = fieldNorm(doc=744)
        0.02720191 = product of:
          0.05440382 = sum of:
            0.05440382 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.05440382 = score(doc=744,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for searching large full-text collections. The conference deals with evaluation and comparison techniques developed since 1992 by participants from research and industry. The work of the conference is aimed at designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC
    Date
    1. 8.1996 22:01:00
  3. Ellis, D.: Progress and problems in information retrieval (1996) 0.05
    0.053433537 = product of:
      0.16030061 = sum of:
        0.16030061 = sum of:
          0.10589679 = weight(_text_:publishing in 789) [ClassicSimilarity], result of:
            0.10589679 = score(doc=789,freq=2.0), product of:
              0.24522576 = queryWeight, product of:
                4.885643 = idf(docFreq=907, maxDocs=44218)
                0.05019314 = queryNorm
              0.4318339 = fieldWeight in 789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.885643 = idf(docFreq=907, maxDocs=44218)
                0.0625 = fieldNorm(doc=789)
          0.05440382 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
            0.05440382 = score(doc=789,freq=2.0), product of:
              0.17576782 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05019314 = queryNorm
              0.30952093 = fieldWeight in 789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=789)
      0.33333334 = coord(1/3)
    
    Date
    26. 7.2002 20:22:46
    Imprint
    London : Library Association Publishing
  4. Hirsh, S.G.: Children's relevance criteria and information seeking on electronic resources (1999) 0.03
    0.028253464 = product of:
      0.08476039 = sum of:
        0.08476039 = weight(_text_:electronic in 4297) [ClassicSimilarity], result of:
          0.08476039 = score(doc=4297,freq=8.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.43194336 = fieldWeight in 4297, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4297)
      0.33333334 = coord(1/3)
    
    Abstract
    This study explores the relevance criteria and search strategies elementary school children applied when searching for information related to a class assignment in a school library setting. Students were interviewed on 2 occasions at different stages of the research process; field observations involved students thinking aloud to explain their search processes and shadowing as students moved around the school library. Students performed searches on an online catalog, an electronic encyclopedia, an electronic magazine index, and the WWW. Results are presented for children selecting the topic, conducting the search, examining the results, and extracting relevant results. A total of 254 mentions of relevance criteria were identified, including 197 references to textual relevance criteria that were coded into 9 categories and 57 references to graphical relevance criteria that were coded into 5 categories. Students exhibited little concern for the authority of the textual and graphical information they found, based the majority of their relevance decisions for textual material on topicality, and identified information they found interesting. Students devoted a large portion of their research time to finding pictures. Understanding the ways that children use electronic resources and the relevance criteria they apply has implications for information literacy training and for systems design
  5. TREC: experiment and evaluation in information retrieval (2005) 0.03
    0.02515765 = product of:
      0.03773647 = sum of:
        0.021190098 = weight(_text_:electronic in 636) [ClassicSimilarity], result of:
          0.021190098 = score(doc=636,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.10798584 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.016546374 = product of:
          0.03309275 = sum of:
            0.03309275 = weight(_text_:publishing in 636) [ClassicSimilarity], result of:
              0.03309275 = score(doc=636,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.13494809 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Series
    Digital libraries and electronic publishing
  6. Marchionini, G.: Information seeking in full-text end-user-oriented search systems : the roles of domain and search expertise (1993) 0.02
    0.02260277 = product of:
      0.06780831 = sum of:
        0.06780831 = weight(_text_:electronic in 5100) [ClassicSimilarity], result of:
          0.06780831 = score(doc=5100,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.34555468 = fieldWeight in 5100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0625 = fieldNorm(doc=5100)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a study that identifies and examines the roles that information-seeking expertise and domain expertise play in information seeking in full-text, end-user search systems. This forms part of an investigation to characterise information seeking and to determine how it is affected by interactive electronic access to primary information. Distinguishes between the approaches of search experts and domain experts. Makes recommendations for systems design
  7. Voorhees, E.M.; Harman, D.K.: The Text REtrieval Conference (2005) 0.02
    0.017127752 = product of:
      0.051383257 = sum of:
        0.051383257 = weight(_text_:electronic in 5082) [ClassicSimilarity], result of:
          0.051383257 = score(doc=5082,freq=6.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.26185176 = fieldWeight in 5082, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5082)
      0.33333334 = coord(1/3)
    
    Abstract
    Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
  8. Evans, J.E.: Some external and internal factors affecting users of interactive information systems (1996) 0.02
    0.016952079 = product of:
      0.050856233 = sum of:
        0.050856233 = weight(_text_:electronic in 6262) [ClassicSimilarity], result of:
          0.050856233 = score(doc=6262,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.259166 = fieldWeight in 6262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=6262)
      0.33333334 = coord(1/3)
    
    Abstract
    This contribution reports the results of continuing research in human-information system interactions. Following training and experience with an electronic information retrieval system, novice and experienced subject groups responded to questions ranking their value assessments of 7 attributes of information sources in relation to 15 factors describing the search process. In general, novice users were more heavily influenced by the process factors (negative influences) than by the positive attributes of information qualities. Experienced users, while still concerned with process factors, were more strongly influenced by the qualitative information attributes. The specific advantages and contributions of this research are several: higher dimensionality of measured factors and attributes (15 x 7); higher granularity of analysis using a 7-value metric in a closed-end Likert scale; development of bi-directional, forced-choice influence vectors; and a larger sample size (N=186) than previously reported in the literature
  9. Cavanagh, A.K.: A comparison of the retrieval performance of multi-disciplinary table-of-contents databases with conventional specialised databases (1997) 0.02
    0.016952079 = product of:
      0.050856233 = sum of:
        0.050856233 = weight(_text_:electronic in 770) [ClassicSimilarity], result of:
          0.050856233 = score(doc=770,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.259166 = fieldWeight in 770, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=770)
      0.33333334 = coord(1/3)
    
    Abstract
    In an endeavour to compare retrieval performance and periodical overlap in a biological field, the same topic was searched on 5 Table of Contents (ToC) databases and 3 specialised biological databases. Performance was assessed in terms of precision and recall. The ToC databases in general had higher precision, in that most material found was relevant. They were less satisfactory in recall, with some locating fewer than 50% of identified high-relevance articles. Subject-specific databases had overall better recall but lower precision, with many more false drops and items of low relevance occurring. These differences were associated with variations in indexing practice and policy and in the searching capabilities of the various databases. In a further comparison, it was found that the electronic databases, as a group, identified only 75% of the articles known from independent sources to have been published in the field
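    Precision and recall, the two measures used in this comparison, are simple set ratios over the retrieved and relevant document sets. A small illustrative sketch; the numbers are hypothetical and not taken from the study:

      def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
          # precision = share of retrieved items that are relevant
          # recall    = share of relevant items that were retrieved
          hits = len(retrieved & relevant)
          precision = hits / len(retrieved) if retrieved else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          return precision, recall

      # Hypothetical ToC-style outcome: 20 items retrieved, 15 of 40 known relevant
      # articles among them -> precision 0.75 (few false drops), recall 0.375 (below 50%).
      p, r = precision_recall(set(range(20)), set(range(5, 45)))

    The trade-off reported above (ToC databases precise but incomplete, subject databases the reverse) shows up directly in these two ratios.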
  10. Bodoff, D.; Kambil, A.: Partial coordination : II. A preliminary evaluation and failure analysis (1998) 0.02
    0.016952079 = product of:
      0.050856233 = sum of:
        0.050856233 = weight(_text_:electronic in 2323) [ClassicSimilarity], result of:
          0.050856233 = score(doc=2323,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.259166 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=2323)
      0.33333334 = coord(1/3)
    
    Abstract
    Partial coordination is a new method for cataloging documents for subject access. It is especially designed to enhance the precision of document searches in online environments. This article reports a preliminary evaluation of partial coordination that shows promising results compared with full-text retrieval. We also report the difficulties in empirically evaluating the effectiveness of automatic full-text retrieval in contrast to mixed methods such as partial coordination, which combine human cataloging with computerized retrieval. Based on our study, we propose that research in this area will substantially benefit from a common framework for failure analysis and a common data set. This will allow information retrieval researchers adapting 'library style' cataloging to large electronic document collections, as well as those developing automated or mixed methods, to directly compare their proposals for indexing and retrieval. This article concludes by suggesting guidelines for constructing such a testbed
  11. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.01586778 = product of:
      0.047603343 = sum of:
        0.047603343 = product of:
          0.095206685 = sum of:
            0.095206685 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.095206685 = score(doc=262,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  12. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    0.01586778 = product of:
      0.047603343 = sum of:
        0.047603343 = product of:
          0.095206685 = sum of:
            0.095206685 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.095206685 = score(doc=6418,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.6, S.57-58
  13. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.01586778 = product of:
      0.047603343 = sum of:
        0.047603343 = product of:
          0.095206685 = sum of:
            0.095206685 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.095206685 = score(doc=6438,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19
  14. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.01586778 = product of:
      0.047603343 = sum of:
        0.047603343 = product of:
          0.095206685 = sum of:
            0.095206685 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.095206685 = score(doc=5089,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 18:43:54
  15. Behnert, C.; Lewandowski, D.: ¬A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.01
    0.014126732 = product of:
      0.042380195 = sum of:
        0.042380195 = weight(_text_:electronic in 3700) [ClassicSimilarity], result of:
          0.042380195 = score(doc=3700,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.21597168 = fieldWeight in 3700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources.
    Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials.
    Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context.
    Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context.
    Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
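    As a purely hypothetical sketch of what one concrete configuration along the five framework parts might look like (field names and values are assumptions for illustration, not the authors' schema):

      from dataclasses import dataclass, field

      @dataclass
      class RetrievalEffectivenessStudy:
          queries: list[str]                # (1) search queries to be run against the system
          results_per_query: int            # (2) number of search results judged per query
          assessors: list[str]              # (3) who provides the human relevance assessments
          relevance_scale: tuple[int, int]  # (4) testing: graded relevance judgment scale
          metrics: list[str] = field(       # (5) data analysis: effectiveness measures
              default_factory=lambda: ["precision@10", "nDCG"])

      study = RetrievalEffectivenessStudy(
          queries=["digital libraries", "open access"],
          results_per_query=10,
          assessors=["assessor_1", "assessor_2"],
          relevance_scale=(0, 3),
      )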
  16. Munkelt, J.; Schaer, P.; Lepsky, K.: Towards an IR test collection for the German National Library (2018) 0.01
    0.0132371 = product of:
      0.0397113 = sum of:
        0.0397113 = product of:
          0.0794226 = sum of:
            0.0794226 = weight(_text_:publishing in 4311) [ClassicSimilarity], result of:
              0.0794226 = score(doc=4311,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.32387543 = fieldWeight in 4311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4311)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic content indexing is one of the innovations that are increasingly changing the way libraries work. In theory, it promises a cataloguing service that would hardly be possible with humans in terms of speed, quantity and maybe quality. The German National Library (DNB) has also recognised this potential and is increasingly relying on the automatic indexing of its catalogue content. The DNB took a major step in this direction in 2017, which was announced in two papers. The announcement was rather restrained, but the content of the papers is all the more explosive for the library community: since September 2017, the DNB has discontinued the intellectual indexing of series B and H and has switched to an automatic process for these series. The subject indexing of online publications (series O) has been purely automatic since 2010; from September 2017, monographs and periodicals published outside the publishing industry and university publications will no longer be indexed by people. This raises the question: what is the quality of the automatic indexing compared to the manual work, or, in other words, to what degree can automatic indexing replace people without a significant drop in quality?
  17. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.01
    0.011334131 = product of:
      0.03400239 = sum of:
        0.03400239 = product of:
          0.06800478 = sum of:
            0.06800478 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.06800478 = score(doc=3103,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:55:22
  18. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.011334131 = product of:
      0.03400239 = sum of:
        0.03400239 = product of:
          0.06800478 = sum of:
            0.06800478 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06800478 = score(doc=3107,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
  19. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.01
    0.011334131 = product of:
      0.03400239 = sum of:
        0.03400239 = product of:
          0.06800478 = sum of:
            0.06800478 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.06800478 = score(doc=2417,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.22-25
  20. Cooper, M.D.; Chen, H.-M.: Predicting the relevance of a library catalog search (2001) 0.01
    0.011301385 = product of:
      0.033904154 = sum of:
        0.033904154 = weight(_text_:electronic in 6519) [ClassicSimilarity], result of:
          0.033904154 = score(doc=6519,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.17277734 = fieldWeight in 6519, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.03125 = fieldNorm(doc=6519)
      0.33333334 = coord(1/3)
    
    Abstract
    Relevance has been a difficult concept to define, let alone measure. In this paper, a simple operational definition of relevance is proposed for a Web-based library catalog: whether or not during a search session the user saves, prints, mails, or downloads a citation. If one of those actions is performed, the session is considered relevant to the user. An analysis is presented illustrating the advantages and disadvantages of this definition. With this definition and good transaction logging, it is possible to ascertain the relevance of a session. This was done for 905,970 sessions conducted with the University of California's Melvyl online catalog. Next, a methodology was developed to try to predict the relevance of a session. A number of variables were defined that characterize a session, none of which used any demographic information about the user. The values of the variables were computed for the sessions. Principal components analysis was used to extract a new set of variables out of the original set. A stratified random sampling technique was used to form ten strata such that each stratum of 90,570 sessions contained the same proportion of relevant to nonrelevant sessions. Logistic regression was used to ascertain the regression coefficients for nine of the ten strata. Then, the coefficients were used to predict the relevance of the sessions in the missing stratum. Overall, 17.85% of the sessions were determined to be relevant. The predicted share of relevant sessions across all ten strata was 11%, a difference of 6.85 percentage points. The authors believe that the methodology can be further refined and the prediction improved. This methodology could also have significant application in improving user searching and also in predicting electronic commerce buying decisions without the use of personal demographic data
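    The prediction pipeline described here (session features without demographics, principal components, ten equally proportioned strata, logistic regression fitted on nine strata and applied to the held-out one) can be sketched as follows. The data are synthetic stand-ins, not the Melvyl transaction logs, and the sample size and feature names are scaled-down assumptions:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold

      # Synthetic session feature vectors and relevance labels; only the ~18%
      # relevance rate mimics the proportion reported above.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(9057, 12))
      y = (rng.random(9057) < 0.1785).astype(int)

      X_pc = PCA(n_components=5).fit_transform(X)   # new variables extracted from the original set

      # Ten strata with the same relevant/non-relevant proportion; fit on nine,
      # predict the held-out stratum.
      skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      train_idx, test_idx = next(iter(skf.split(X_pc, y)))
      model = LogisticRegression(max_iter=1000).fit(X_pc[train_idx], y[train_idx])
      predicted_share = model.predict(X_pc[test_idx]).mean()   # predicted share of relevant sessions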

Languages

  • e 42
  • d 3
  • f 1

Types

  • a 42
  • s 4
  • m 3