Search (117 results, page 1 of 6)

  • theme_ss:"Retrievalstudien"
  1. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.08
    0.08209024 = product of:
      0.1915439 = sum of:
        0.1219638 = weight(_text_:case in 7302) [ClassicSimilarity], result of:
          0.1219638 = score(doc=7302,freq=8.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.68003565 = fieldWeight in 7302, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.05023533 = weight(_text_:studies in 7302) [ClassicSimilarity], result of:
          0.05023533 = score(doc=7302,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.30860704 = fieldWeight in 7302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.019344779 = product of:
          0.038689557 = sum of:
            0.038689557 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
              0.038689557 = score(doc=7302,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.2708308 = fieldWeight in 7302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7302)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question.
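    The indented breakdown above each entry is the search engine's Lucene explain() output: for every matching query term it reports a fieldWeight (tf x idf x fieldNorm) multiplied by a queryWeight (idf x queryNorm), and the per-term scores are then summed and scaled by a coordination factor. As a reading aid only, the sketch below reproduces the figures shown for the term "case" in result 1, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))); it is not taken from the search engine's own code.

      import math

      # Figures copied from the explain tree of result 1 (term "case", doc 7302).
      freq = 8.0
      doc_freq, max_docs = 1480, 44218
      query_norm = 0.04079441
      field_norm = 0.0546875

      tf = math.sqrt(freq)                            # 2.828427
      idf = 1 + math.log(max_docs / (doc_freq + 1))   # approx. 4.3964143
      query_weight = idf * query_norm                 # approx. 0.17934912
      field_weight = tf * idf * field_norm            # approx. 0.68003565
      term_score = query_weight * field_weight        # approx. 0.1219638

      # The three matching terms are summed and scaled by coord(3/7),
      # because 3 of the 7 query clauses matched this document.
      doc_score = (0.1219638 + 0.05023533 + 0.019344779) * (3 / 7)
      print(term_score, doc_score)                    # approx. 0.1219638 and 0.08209024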
  2. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.04
    0.036397107 = product of:
      0.12738986 = sum of:
        0.041272152 = weight(_text_:libraries in 2021) [ClassicSimilarity], result of:
          0.041272152 = score(doc=2021,freq=4.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.30797386 = fieldWeight in 2021, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
        0.08611771 = weight(_text_:studies in 2021) [ClassicSimilarity], result of:
          0.08611771 = score(doc=2021,freq=8.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.52904063 = fieldWeight in 2021, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.2857143 = coord(2/7)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  3. Ruthven, I.: Relevance behaviour in TREC (2014) 0.03
    0.032949504 = product of:
      0.11532326 = sum of:
        0.043558497 = weight(_text_:case in 1785) [ClassicSimilarity], result of:
          0.043558497 = score(doc=1785,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.24286987 = fieldWeight in 1785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1785)
        0.07176476 = weight(_text_:studies in 1785) [ClassicSimilarity], result of:
          0.07176476 = score(doc=1785,freq=8.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.44086722 = fieldWeight in 1785, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1785)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as a test-bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but to learning more about human judgements of information and relevance and that these studies can provide useful research questions for other types of investigation. Design/methodology/approach - Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions and answer new research questions for exploring relevance. Findings - The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance. Research limitations/implications - The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections. Originality/value - The paper presents an original viewpoint on relevance investigations and TREC itself by motivating TREC as a source of inspiration on understanding relevance rather than purely as a source of evaluation material.
  4. Behnert, C.; Lewandowski, D.: A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.02
    0.024705702 = product of:
      0.086469956 = sum of:
        0.024319848 = weight(_text_:libraries in 3700) [ClassicSimilarity], result of:
          0.024319848 = score(doc=3700,freq=2.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.18147534 = fieldWeight in 3700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
        0.062150106 = weight(_text_:studies in 3700) [ClassicSimilarity], result of:
          0.062150106 = score(doc=3700,freq=6.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.3818022 = fieldWeight in 3700, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach - We apply conventional procedures from information retrieval evaluation to the library information system context considering the specific characteristics of modern library materials. Findings - We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications - The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value - Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
  5. Cuadra, C.A.; Katter, R.V.: Experimental studies of relevance judgements: final report : Vol.2: description of individual studies (1967) 0.02
    0.023197873 = product of:
      0.1623851 = sum of:
        0.1623851 = weight(_text_:studies in 5356) [ClassicSimilarity], result of:
          0.1623851 = score(doc=5356,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.9975686 = fieldWeight in 5356, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.125 = fieldNorm(doc=5356)
      0.14285715 = coord(1/7)
    
  6. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.02
    0.022135902 = product of:
      0.07747565 = sum of:
        0.06089442 = weight(_text_:studies in 2552) [ClassicSimilarity], result of:
          0.06089442 = score(doc=2552,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.37408823 = fieldWeight in 2552, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=2552)
        0.016581237 = product of:
          0.033162475 = sum of:
            0.033162475 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
              0.033162475 = score(doc=2552,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.23214069 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
    Date
    9. 2.1997 18:44:22
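    The abstract above names two ways of computing pairwise indexer consistency but does not spell out their formulas. The sketch below uses the forms usually attributed to Hooper (1965) and to the abstract's second measure, 'Rollin (1981)' (agreements divided by the number of distinct terms either indexer assigned, and twice the agreements divided by the sum of the two term counts, respectively); the function names and the example term sets are illustrative assumptions, not data from the study.

      def hooper_consistency(terms_a, terms_b):
          # Agreements divided by all distinct terms assigned by either indexer.
          a, b = set(terms_a), set(terms_b)
          common = len(a & b)
          return common / (len(a) + len(b) - common)

      def rollin_consistency(terms_a, terms_b):
          # Twice the agreements divided by the sum of the two indexers' term counts.
          a, b = set(terms_a), set(terms_b)
          return 2 * len(a & b) / (len(a) + len(b))

      # Hypothetical pair of indexings of the same record.
      i1 = {"information retrieval", "relevance", "evaluation"}
      i2 = {"information retrieval", "evaluation", "test collections", "TREC"}
      print(hooper_consistency(i1, i2))   # 2 / 5 = 0.4
      print(rollin_consistency(i1, i2))   # 4 / 7 = 0.571...

    For any pair of term sets the second value is at least as large as the first, which matches the pattern of the percentages reported in the abstract.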
  7. Saracevic, T.: Effects of inconsistent relevance judgments on information retrieval test results : a historical perspective (2008) 0.02
    0.0214472 = product of:
      0.075065196 = sum of:
        0.024319848 = weight(_text_:libraries in 5585) [ClassicSimilarity], result of:
          0.024319848 = score(doc=5585,freq=2.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.18147534 = fieldWeight in 5585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5585)
        0.050745346 = weight(_text_:studies in 5585) [ClassicSimilarity], result of:
          0.050745346 = score(doc=5585,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.3117402 = fieldWeight in 5585, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5585)
      0.2857143 = coord(2/7)
    
    Abstract
    The main objective of information retrieval (IR) systems is to retrieve information or information objects relevant to user requests and possible needs. In IR tests, retrieval effectiveness is established by comparing IR systems retrievals (systems relevance) with users' or user surrogates' assessments (user relevance), where user relevance is treated as the gold standard for performance evaluation. Relevance is a human notion, and establishing relevance by humans is fraught with a number of problems-inconsistency in judgment being one of them. The aim of this critical review is to explore the relationship between relevance on the one hand and testing of IR systems and procedures on the other. Critics of IR tests raised the issue of validity of the IR tests because they were based on relevance judgments that are inconsistent. This review traces and synthesizes experimental studies dealing with (1) inconsistency of relevance judgments by people, (2) effects of such inconsistency on results of IR tests and (3) reasons for retrieval failures. A historical context for these studies and for IR testing is provided including an assessment of Lancaster's (1969) evaluation of MEDLARS and its unique place in the history of IR evaluation.
    Content
    Contribution to a special issue, 'The Influence of F. W. Lancaster on Information Science and on Libraries', which is declared a Festschrift for F.W. Lancaster.
  8. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.02
    0.018446585 = product of:
      0.06456304 = sum of:
        0.050745346 = weight(_text_:studies in 1786) [ClassicSimilarity], result of:
          0.050745346 = score(doc=1786,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.3117402 = fieldWeight in 1786, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1786)
        0.0138177 = product of:
          0.0276354 = sum of:
            0.0276354 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
              0.0276354 = score(doc=1786,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.19345059 = fieldWeight in 1786, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1786)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Date
    6. 4.2015 19:31:22
  9. Harter, S.P.: Search term combinations and retrieval overlap : a proposed methodology and case study (1990) 0.02
    0.0174234 = product of:
      0.1219638 = sum of:
        0.1219638 = weight(_text_:case in 339) [ClassicSimilarity], result of:
          0.1219638 = score(doc=339,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.68003565 = fieldWeight in 339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.109375 = fieldNorm(doc=339)
      0.14285715 = coord(1/7)
    
  10. Park, S.: Usability, user preferences, effectiveness, and user behaviors when searching individual and integrated full-text databases : implications for digital libraries (2000) 0.02
    0.017200638 = product of:
      0.060202226 = sum of:
        0.024319848 = weight(_text_:libraries in 4591) [ClassicSimilarity], result of:
          0.024319848 = score(doc=4591,freq=2.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.18147534 = fieldWeight in 4591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
        0.03588238 = weight(_text_:studies in 4591) [ClassicSimilarity], result of:
          0.03588238 = score(doc=4591,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.22043361 = fieldWeight in 4591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
      0.2857143 = coord(2/7)
    
    Abstract
    This article addresses a crucial issue in the digital library environment: how to support effective interaction of users with heterogeneous and distributed information resources. In particular, this study compared usability, user preference, effectiveness, and searching behaviors in systems that implement interaction with multiple databases as if they were one (integrated interaction) in an experiment in the TREC environment. 28 volunteers were recruited from the graduate students of the School of Communication, Information & Library Studies at Rutgers University. Significantly more subjects preferred the common interface to the integrated interface, mainly because they could have more control over database selection. Subjects were also more satisfied with the results from the common interface, and performed better with the common interface than with the integrated interface. Overall, it appears that for this population, interacting with databases through a common interface is preferable on all grounds to interacting with databases through an integrated interface. These results suggest that: (1) the general assumption of the information retrieval (IR) literature that an integrated interaction is best needs to be revisited; (2) it is important to allow for more user control in the distributed environment; (3) for digital library purposes, it is important to characterize different databases to support user choice for integration; and (4) certain users prefer control over database selection while still opting for results to be merged.
  11. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.02
    0.01525502 = product of:
      0.053392567 = sum of:
        0.03404779 = weight(_text_:libraries in 5001) [ClassicSimilarity], result of:
          0.03404779 = score(doc=5001,freq=2.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.25406548 = fieldWeight in 5001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.019344779 = product of:
          0.038689557 = sum of:
            0.038689557 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.038689557 = score(doc=5001,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    14. 3.1996 13:22:21
    Source
    Special libraries. 74(1983) no.1, S. 56-60
  12. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.01
    0.014352952 = product of:
      0.10047066 = sum of:
        0.10047066 = weight(_text_:studies in 5004) [ClassicSimilarity], result of:
          0.10047066 = score(doc=5004,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.6172141 = fieldWeight in 5004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.109375 = fieldNorm(doc=5004)
      0.14285715 = coord(1/7)
    
    Source
    International journal of man-machine studies. 30(1989), S.639-668
  13. Beaulieu, M.: Approaches to user-based studies in information seeking and retrieval : a Sheffield perspective (2003) 0.01
    0.014352952 = product of:
      0.10047066 = sum of:
        0.10047066 = weight(_text_:studies in 4692) [ClassicSimilarity], result of:
          0.10047066 = score(doc=4692,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.6172141 = fieldWeight in 4692, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.109375 = fieldNorm(doc=4692)
      0.14285715 = coord(1/7)
    
  14. Barry, C.I.; Schamber, L.: User-defined relevance criteria : a comparison of 2 studies (1995) 0.01
    0.01420574 = product of:
      0.09944017 = sum of:
        0.09944017 = weight(_text_:studies in 3850) [ClassicSimilarity], result of:
          0.09944017 = score(doc=3850,freq=6.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.61088353 = fieldWeight in 3850, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0625 = fieldNorm(doc=3850)
      0.14285715 = coord(1/7)
    
    Abstract
    Aims to determine the extent to which there is a core of relevance criteria that spans such factors as information need situations, user environments, and types of information. 2 recent empirical studies have identified and described user-defined relevance criteria. Synthesizes the findings of the 2 studies as a 1st step toward identifying criteria that seem to span information environments and criteria that may be more situationally specific.
  15. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.01
    0.014200024 = product of:
      0.04970008 = sum of:
        0.03588238 = weight(_text_:studies in 2339) [ClassicSimilarity], result of:
          0.03588238 = score(doc=2339,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.22043361 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.0138177 = product of:
          0.0276354 = sum of:
            0.0276354 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.0276354 = score(doc=2339,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  16. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    0.014200024 = product of:
      0.04970008 = sum of:
        0.03588238 = weight(_text_:studies in 1184) [ClassicSimilarity], result of:
          0.03588238 = score(doc=1184,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.22043361 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.0138177 = product of:
          0.0276354 = sum of:
            0.0276354 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.0276354 = score(doc=1184,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach, and mine the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  17. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.01
    0.014200024 = product of:
      0.04970008 = sum of:
        0.03588238 = weight(_text_:studies in 4540) [ClassicSimilarity], result of:
          0.03588238 = score(doc=4540,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.22043361 = fieldWeight in 4540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4540)
        0.0138177 = product of:
          0.0276354 = sum of:
            0.0276354 = weight(_text_:22 in 4540) [ClassicSimilarity], result of:
              0.0276354 = score(doc=4540,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.19345059 = fieldWeight in 4540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4540)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Purpose - This study intends to identify factors that affect relevance judgment of retrieved information as part of the 2007 TREC Legal track interactive task. Design/methodology/approach - Data were gathered and analyzed from the participants of the 2007 TREC Legal track interactive task using a questionnaire which includes not only a list of 80 relevance factors identified in prior research, but also a space for expressing their thoughts on relevance judgment in the process. Findings - This study finds that topicality remains a primary criterion, out of various options, for determining relevance, while specificity of the search request, task, or retrieved results also helps greatly in relevance judgment. Research limitations/implications - Relevance research should focus on the topicality and specificity of what is being evaluated as well as conducted in real environments. Practical implications - If multiple relevance factors are presented to assessors, the total number in a list should be below ten to take account of the limited processing capacity of human beings' short-term memory. Otherwise, the assessors might either completely ignore or inadequately consider some of the relevance factors when making judgment decisions. Originality/value - This study presents a method for reducing the artificiality of relevance research design, an apparent limitation in many related studies. Specifically, relevance judgment was made in this research as part of the 2007 TREC Legal track interactive task rather than a study devised for the sake of it. The assessors also served as searchers so that their searching experience would facilitate their subsequent relevance judgments.
    Date
    12. 7.2011 18:29:22
  18. Borgman, C.L.: Why are online catalogs still hard to use? (1996) 0.01
    0.01376051 = product of:
      0.048161782 = sum of:
        0.019455878 = weight(_text_:libraries in 4380) [ClassicSimilarity], result of:
          0.019455878 = score(doc=4380,freq=2.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.14518027 = fieldWeight in 4380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03125 = fieldNorm(doc=4380)
        0.028705904 = weight(_text_:studies in 4380) [ClassicSimilarity], result of:
          0.028705904 = score(doc=4380,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.17634688 = fieldWeight in 4380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=4380)
      0.2857143 = coord(2/7)
    
    Abstract
    We return to arguments made 10 years ago that online catalogs are difficult to use because their design does not incorporate sufficient understanding of searching behavior. The earlier article examined studies of information retrieval system searching for their implications for online catalog design; this article examines the implications of card catalog design for online catalogs. With this analysis, we hope to contribute to a better understanding of user behavior and to lay to rest the card catalog design model for online catalogs. We discuss the problems with query matching systems, which were designed for skilled search intermediaries rather than end-users, and the knowledge and skills they require in the information-seeking process, illustrated with examples of searching card and online catalogs. Searching requires conceptual knowledge of the information retrieval process - translating an information need into a searchable query; semantic knowledge of how to implement a query in a given system - the how and when to use system features; and technical skills in executing the query - basic computing skills and the syntax of entering queries as specific search statements. In the short term, we can help make online catalogs easier to use through improved training and documentation that is based on information-seeking behavior, with the caveat that good training is not a substitute for good system design. Our long-term goal should be to design intuitive systems that require a minimum of instruction. Given the complexity of the information retrieval problem and the limited capabilities of today's systems, we are far from achieving that goal. If libraries are to provide primary information services for the networked world, they need to put research results on the information-seeking process into practice in designing the next generation of online public access information retrieval systems.
  19. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.01
    0.01376051 = product of:
      0.048161782 = sum of:
        0.019455878 = weight(_text_:libraries in 3649) [ClassicSimilarity], result of:
          0.019455878 = score(doc=3649,freq=2.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.14518027 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
        0.028705904 = weight(_text_:studies in 3649) [ClassicSimilarity], result of:
          0.028705904 = score(doc=3649,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.17634688 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
      0.2857143 = coord(2/7)
    
    Abstract
    F. W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline - his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented. Indexing exhaustivity was increased and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
    Imprint
    Littleton, CO : Libraries Unlimited
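    As a companion to the recall/precision failure analysis described in the abstract above, here is a minimal set-based sketch of the two measures; the document identifiers and counts are invented for the example, and the hard part Lancaster actually faced - estimating how many relevant documents the whole database contains - is simply assumed.

      def precision_recall(retrieved_ids, relevant_ids):
          # A recall failure is a relevant document not retrieved;
          # a precision failure is a retrieved document that is not relevant.
          retrieved, relevant = set(retrieved_ids), set(relevant_ids)
          hits = retrieved & relevant
          precision = len(hits) / len(retrieved) if retrieved else 0.0
          recall = len(hits) / len(relevant) if relevant else 0.0
          return precision, recall

      # Hypothetical search: 4 of 10 retrieved documents are relevant,
      # out of an assumed 8 relevant documents in the whole collection.
      retrieved = ["d%d" % i for i in range(1, 11)]
      relevant = ["d1", "d2", "d3", "d4", "d11", "d12", "d13", "d14"]
      print(precision_recall(retrieved, relevant))  # (0.4, 0.5)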
  20. Hull, D.A.: Stemming algorithms : a case study for detailed evaluation (1996) 0.01
    0.012320205 = product of:
      0.08624143 = sum of:
        0.08624143 = weight(_text_:case in 2999) [ClassicSimilarity], result of:
          0.08624143 = score(doc=2999,freq=4.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.48085782 = fieldWeight in 2999, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2999)
      0.14285715 = coord(1/7)
    
    Abstract
    The majority of information retrieval experiments are evaluated by measures such as average precision and average recall. Fundamental decisions about the superiority of one retrieval technique over another are made solely on the bases of these measures. We claim that average performance figures need to be validated with a careful statistical analysis and that there is a great deal of additional information that can be uncovered by looking closely at the results of individual queries. This article is a case study of stemming algorithms which describes a number of novel approaches to evaluation and demonstrates their value.
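    The abstract refers to average precision as one of the figures on which comparisons between retrieval techniques are usually based. As a small illustration of why per-query inspection matters, the sketch below computes non-interpolated average precision for two hypothetical rankings; the rankings and relevance sets are invented for the example, and the measure is one common formulation rather than necessarily the exact one used in Hull's study.

      def average_precision(ranked_doc_ids, relevant_ids):
          # Sum of the precision values at each rank where a relevant document appears,
          # divided by the total number of relevant documents for the query.
          relevant = set(relevant_ids)
          hits, precisions = 0, []
          for rank, doc_id in enumerate(ranked_doc_ids, start=1):
              if doc_id in relevant:
                  hits += 1
                  precisions.append(hits / rank)
          return sum(precisions) / len(relevant) if relevant else 0.0

      # Two hypothetical runs (e.g. two stemming variants) on the same query.
      run_a = average_precision(["d3", "d1", "d7", "d2"], {"d1", "d2"})
      run_b = average_precision(["d1", "d9", "d2", "d4"], {"d1", "d2"})
      print(run_a, run_b)  # 0.5 and approx. 0.833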

Languages

  • e 112
  • d 3
  • f 1

Types

  • a 111
  • s 4
  • el 3
  • m 3
  • r 1