Search (405 results, page 2 of 21)

  • Filter: theme_ss:"Retrievalstudien" ("retrieval studies")
  1. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.02
    0.0206593 = product of:
      0.061977897 = sum of:
        0.019801848 = weight(_text_:of in 2026) [ClassicSimilarity], result of:
          0.019801848 = score(doc=2026,freq=28.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32322758 = fieldWeight in 2026, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.02890629 = weight(_text_:systems in 2026) [ClassicSimilarity], result of:
          0.02890629 = score(doc=2026,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.24009174 = fieldWeight in 2026, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
              0.026539518 = score(doc=2026,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 2026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    This paper discusses the role of user-centred evaluation as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system; it also enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (43 people were involved in total) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluation as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, pp.22-38
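
    The score breakdown under entry 1 is Lucene's ClassicSimilarity (TF-IDF) explanation, and every headline score in this list is produced the same way: for each matching term, tf = sqrt(termFreq) is multiplied by idf and the stored fieldNorm to give the fieldWeight; that is scaled by queryWeight = idf * queryNorm; the per-term contributions are summed; and the sum is multiplied by the coordination factor coord(m/n), the fraction of query clauses that matched. A minimal Python sketch reproducing entry 1's figures (the constants are copied from the breakdown above; the helper name is our own):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # One term's contribution in a ClassicSimilarity explanation:
          # queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          tf = math.sqrt(freq)            # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm
          field_weight = tf * idf * field_norm
          return query_weight * field_weight

      QUERY_NORM = 0.03917671  # copied from the breakdown above
      FIELD_NORM = 0.0390625

      score = (
          term_score(28.0, 1.5637573, QUERY_NORM, FIELD_NORM)         # _text_:of
          + term_score(4.0, 3.0731742, QUERY_NORM, FIELD_NORM)        # _text_:systems
          + 0.5 * term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # _text_:22, under coord(1/2)
      ) * (3.0 / 9.0)                                                 # coord(3/9)

      print(round(score, 7))  # 0.0206593 -- entry 1's headline score
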
  2. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.02
    
    Abstract
    Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept that 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information-theoretic measure. Reports the results of an evaluation exercise comparing the performance of imaging retrieval, using possible-world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially the performance of imaging retrieval appeared better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that must be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
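
    Precision and recall, the yardsticks of the Cranfield-style comparison above (and of several entries below), are plain set arithmetic over the retrieved and relevant document sets. A minimal sketch, with invented document IDs:

      def precision_recall(retrieved, relevant):
          # precision = |retrieved & relevant| / |retrieved|
          # recall    = |retrieved & relevant| / |relevant|
          retrieved, relevant = set(retrieved), set(relevant)
          hits = len(retrieved & relevant)
          return hits / len(retrieved), hits / len(relevant)

      # Toy example: 10 documents retrieved, 8 relevant documents in the collection
      p, r = precision_recall(range(1, 11), [2, 3, 5, 7, 11, 13, 17, 19])
      print(p, r)  # 0.4 0.5
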
  3. Tague-Sutcliffe, J.M.: Some perspectives on the evaluation of information retrieval systems (1996) 0.02
    
    Abstract
    As an introduction to the papers in this special issue, some of the major problems facing investigators evaluating information retrieval systems are presented. These problems include the question of the necessity of using real users, as opposed to subject experts, in making relevance judgements; the possibility of evaluating individual components of the retrieval process, rather than the process as a whole; the kinds of aggregation that are appropriate for the measures used in evaluating systems; the value of an analytic or simulatory, as opposed to an experimental, approach in evaluating retrieval systems; the difficulties in evaluating interactive systems; and the kinds of generalization that are possible from information retrieval tests.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, pp.1-3
  4. Feldman, S.: Testing natural language : comparing DIALOG, TARGET, and DR-LINK (1996) 0.02
    
    Abstract
    Compares online searching in DIALOG (a traditional Boolean system), TARGET (a relevance-ranking system) and DR-LINK (an advanced intelligent text processing system), in order to establish the differing strengths of traditional and natural language processing search systems. Details the example search queries used in the comparison and how each of the systems performed. Considers the implications of the findings for professional information searchers and end users. Natural language processing systems are useful because they develop a wider understanding of queries than traditional systems may.
  5. Tombros, T.; Crestani, F.: Users' perception of relevance of spoken documents (2000) 0.02
    
    Abstract
    We present the results of a study of users' perception of the relevance of documents. The aim is to study experimentally how users' perception varies depending on the form in which retrieved documents are presented. Documents retrieved in response to a query are presented to users in a variety of ways, from full text to a machine-spoken, query-biased, automatically generated summary, and the difference in users' perception of relevance is studied. The experimental results suggest that the effectiveness of advanced multimedia information retrieval applications may be affected by the low level of users' perception of the relevance of retrieved documents.
    Source
    Journal of the American Society for Information Science. 51(2000) no.10, pp.929-939
  6. Keen, E.M.: Laboratory tests of manual systems (1981) 0.02
    
  7. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.02
    
    LCSH
    Information storage and retrieval systems / Evaluation
    Interactive computer systems / Evaluation
  8. Wildemuth, B.M.: Measures of success in searching a full-text fact base (1990) 0.02
    
    Abstract
    The traditional measures of online searching proficiency (recall and precision) are less appropriate when applied to the searching of full-text databases. The pilot study investigated and evaluated 5 measures of overall success in searching a full-text fact base. Data were drawn from INQUIRER searches conducted by medical students at the University of North Carolina at Chapel Hill. INQUIRER is an online database of facts and concepts in microbiology. The 5 measures were: success/failure; precision; search term overlap; number of search cycles; and time per search. Concludes that the last 4 measures look promising for the evaluation of fact bases such as INQUIRER.
    Source
    ASIS'90: Information in the year 2000, from research to applications. Proc. of the 53rd Annual Meeting of the American Society for Information Science, Toronto, Canada, 4-8 Nov 1990. Ed. by Diana Henderson
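
    Of the five measures, 'search term overlap' is the least standard; one plausible reading (our assumption, not necessarily the paper's definition) is the Jaccard coefficient between the term sets of successive query formulations:

      def term_overlap(query_a, query_b):
          # Jaccard coefficient of the two term sets: |A & B| / |A | B|
          a, b = set(query_a), set(query_b)
          return len(a & b) / len(a | b)

      print(term_overlap(["gram", "negative", "rods"],
                         ["gram", "negative", "bacilli"]))  # 0.5
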
  9. Tonta, Y.: Analysis of search failures in document retrieval systems : a review (1992) 0.02
    
    Abstract
    This paper examines search failures in document retrieval systems. Since search failures are closely related to overall document retrieval system performance, the paper briefly discusses retrieval effectiveness measures such as precision and recall. It examines 4 methods used to study retrieval failures: retrieval effectiveness measures, user satisfaction measures, transaction log analysis, and the critical incident technique. It summarizes the findings of major failure analysis studies and identifies the types of failures that usually occur in document retrieval systems.
    Source
    Public-access computer systems review. 3(1992) no.1, pp.4-53
  10. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.02
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
  11. Shenouda, W.: Online bibliographic searching : how end-users modify their search strategies (1990) 0.02
    
    Abstract
    The study attempted to identify how end-users modify their initial search strategies in the light of new information presented during their interaction with an online bibliographic information retrieval system in a real environment. This exploratory study was also conducted to determine the effectiveness of the changes made by users during the online process in retrieving relevant documents. Analysis of the data shows that all end-users modify their searches during the online process. Results indicate that certain changes were made more frequently than others. Changes affecting relevance and characteristics of end-users' online search behaviour were also identified.
    Source
    ASIS'90: Information in the year 2000, from research to applications. Proc. of the 53rd Annual Meeting of the American Society for Information Science, Toronto, Canada, 4-8 Nov 1990. Ed. by Diana Henderson
  12. Hersh, W.R.; Hickam, D.H.: An evaluation of interactive Boolean and natural language searching with an online medical textbook (1995) 0.02
    
    Abstract
    Few studies have compared the interactive use of Boolean and natural language search systems. Studies the use of 3 retrieval systems by senior medical students searching on queries generated by actual physicians in a clinical setting. The searchers were randomized to search on 2 or 3 different retrieval systems: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall or precision among the 3 systems. Likewise, there was no user preference for any system over the others. The study revealed problems with traditional measures of retrieval evaluation when applied to the interactive search setting.
    Source
    Journal of the American Society for Information Science. 46(1995) no.7, pp.478-489
  13. Angelini, M.; Fazzini, V.; Ferro, N.; Santucci, G.; Silvello, G.: CLAIRE: A combinatorial visual analytics system for information retrieval evaluation (2018) 0.02
    
    Abstract
    Information Retrieval (IR) develops complex systems, composed of several components, which aim at returning and optimally ranking the most relevant documents in response to user queries. In this context, experimental evaluation plays a central role, since it allows for measuring IR system effectiveness, increasing the understanding of system functioning, and better directing the efforts for improving systems. Current evaluation methodologies are limited by two major factors: (i) IR systems are evaluated as "black boxes", since it is not possible to decompose the contributions of the different components, e.g., stop lists, stemmers, and IR models; (ii) given that it is not possible to predict the effectiveness of an IR system, both academia and industry need to explore huge numbers of systems, generated by large combinatorial compositions of their components, to understand how they perform and how these components interact together. We propose a Combinatorial visuaL Analytics system for Information Retrieval Evaluation (CLAIRE) which allows for exploring and making sense of the performances of a large number of IR systems, in order to quickly and intuitively grasp which system configurations are preferred, what the contributions of the different components are, and how these components interact together. The CLAIRE system is then validated against use cases based on several test collections, using a wide set of systems generated by a combinatorial composition of several off-the-shelf components, representing the most common denominator almost always present in English IR systems. In particular, we validate the findings enabled by CLAIRE with respect to consolidated deep statistical analyses, and we show that the CLAIRE system allows the generation of new insights which were not detectable with traditional approaches.
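
    The combinatorial space CLAIRE explores comes from crossing the alternatives for each pipeline component; the enumeration itself is straightforward, as in this sketch (the component lists are illustrative, not taken from the paper):

      from itertools import product

      # Illustrative alternatives per component (not CLAIRE's actual lists)
      stop_lists = ["none", "smart", "terrier"]
      stemmers = ["none", "porter", "krovetz", "snowball"]
      ir_models = ["tfidf", "bm25", "lm_dirichlet", "dfr"]

      configs = list(product(stop_lists, stemmers, ir_models))
      print(len(configs))  # 3 * 4 * 4 = 48 distinct systems from three components
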
  14. Allen, B.: Logical reasoning and retrieval performance (1993) 0.02
    
    Abstract
    Tests the logical reasoning ability of end users of a CD-ROM index and assesses associations between different levels of this ability and aspects of retrieval performance. Users' selection of vocabulary and their selection of citations for further examination are both influenced by this ability. The designs of information systems should address the effects of logical reasoning on search behaviour. People with lower levels of logical reasoning ability may experience difficulty using systems in which user selectivity plays an important role. Other systems, such as those with ranked output, may decrease the need for users to make selections and would be easier to use for people with lower levels of logical reasoning ability
  15. Borlund, P.: Experimental components for the evaluation of interactive information retrieval systems (2000) 0.02
    
    Abstract
    This paper presents a set of basic components constituting the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way that is as close as possible to realistic IR processes. The experimental setting consists of 3 components: (1) the involvement of potential users as test persons; (2) the application of dynamic and individual information needs; and (3) the use of multidimensional and dynamic relevance judgements. Hidden under the information need component is the essential central sub-component, the simulated work task situation, the tool that triggers the (simulated) dynamic information need. This paper also reports on the empirical findings of the meta-evaluation of the application of this sub-component, the purpose of which is to discover whether the application of simulated work task situations to future evaluation of IIR systems can be recommended. Investigations are carried out to determine whether any search-behavioural differences exist between test persons' treatment of their own real information needs and simulated information needs. The hypothesis is that if no difference exists, one can correctly substitute real information needs with simulated information needs through the application of simulated work task situations. The empirical results of the meta-evaluation provide positive evidence for the application of simulated work task situations to the evaluation of IIR systems. The results also indicate that tailoring work task situations to the group of test persons is important in motivating them. Furthermore, the results of the evaluation show that different versions of semantic openness of the simulated situations make no difference to the test persons' search treatment.
    Source
    Journal of documentation. 56(2000) no.1, pp.71-90
  16. Gillman, P.: Text retrieval (1998) 0.02
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW appear to ignore all the serious work done on information retrieval over the past 4 decades
  17. Keen, E.M.; Hartley, R.J.: Phrase processing in text retrieval (1994) 0.02
    
    Abstract
    After introducing types of records, queries and text processing options, the features needed in software for phrase processing are identified and different approaches in current text retrieval research in the Text Retrieval Conference (TREC) projects are enumerated. Then follow eight observations on issues in phrase searching relating both to practice and to research, giving the authors' selection of crucial and controversial issues, supported by 21 references
    Source
    Journal of document and text management. 2(1994) no.1, pp.23-34
  18. Salton, G.: Thoughts about modern retrieval technologies (1988) 0.02
    
    Abstract
    Paper presented at the 30th Annual Conference of the National Federation of Abstracting and Information Services, Philadelphia, 28 Feb-2 Mar 88. In recent years, the amount and variety of available machine-readable data have grown, and new technologies have been introduced, such as high-density storage devices and fancy graphic displays useful for information transformation and access. New approaches have also been considered for processing the stored data, based on the construction of knowledge bases representing the contents and structure of the information, and on the use of expert system techniques to control user-system interactions. Provides a brief evaluation of the new information processing technologies and of the software methods proposed for information manipulation.
  19. Ellis, D.: Progress and problems in information retrieval (1996) 0.01
    
    Abstract
    An introduction to the principal generic approaches to information retrieval research with their associated concepts, models and systems, this text is designed to keep the information professional up to date with the major themes and developments that have preoccupied researchers in recent months in relation to textual and documentary retrieval systems.
    Date
    26. 7.2002 20:22:46
  20. Saracevic, T.; Mokros, H.; Su, L.: Nature of interaction between users and intermediaries in online searching : a qualitative analysis (1990) 0.01
    
    Abstract
    Reports preliminary results from a study, conducted at Rutgers University's School of Communication, Information and Library Studies, of observations and experiments under real-life conditions on the nature, effects and patterns of the discourse between users and intermediary searchers, and of the related computer commands and responses, in the context of online searching. The study involved videotaping interactions between users and intermediaries and recording the search logs for 40 questions. Users judged the relevance of output and completed a number of other measures. Data are analysed both quantitatively, using standard and innovative statistical techniques, and qualitatively, through a grounded theory approach using microanalytic and observational methods.
    Source
    ASIS'90: Information in the year 2000, from research to applications. Proc. of the 53rd Annual Meeting of the American Society for Information Science, Toronto, Canada, 4-8 Nov 1990. Ed. by Diana Henderson

Types

  • a 373
  • s 15
  • m 9
  • el 7
  • r 6
  • x 3
  • d 1
  • p 1