Search (204 results, page 1 of 11)

  • theme_ss:"Retrievalstudien"
  1. Evans, D.A.; Lefferts, R.G.: CLARIT-TREC experiments (1995) 0.10
    0.09831333 = product of:
      0.14747 = sum of:
        0.091177315 = weight(_text_:management in 1912) [ClassicSimilarity], result of:
          0.091177315 = score(doc=1912,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.5266582 = fieldWeight in 1912, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.078125 = fieldNorm(doc=1912)
        0.05629268 = product of:
          0.11258536 = sum of:
            0.11258536 = weight(_text_:system in 1912) [ClassicSimilarity], result of:
              0.11258536 = score(doc=1912,freq=8.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.6959594 = fieldWeight in 1912, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1912)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
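
    For readers unfamiliar with Lucene's "explain" trees: each clause above is a ClassicSimilarity (TF-IDF) weight, computed as queryWeight × fieldWeight, with tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm; the document score is the sum of the clause scores multiplied by the coord factor. The Python sketch below (Python standing in for Lucene's Java) recomputes the first "management" clause from the constants shown in the tree.

        import math

        def idf(doc_freq: int, max_docs: int) -> float:
            # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def tf(freq: float) -> float:
            # ClassicSimilarity tf: sqrt(termFreq)
            return math.sqrt(freq)

        # Constants copied from the explain tree above (doc=1912, term "management").
        query_norm = 0.051362853
        field_norm = 0.078125

        idf_mgmt = idf(4130, 44218)                     # 3.3706124
        query_weight = idf_mgmt * query_norm            # 0.17312427
        field_weight = tf(4.0) * idf_mgmt * field_norm  # 0.5266582

        print(query_weight * field_weight)              # 0.091177315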
    
    Abstract
    Describes the following elements of the CLARIT information management system: natural language processing, document indexing, vector-space querying, and query augmentation. Reports on the processing results obtained as part of TREC-2 and on investigations into system parameterization. Results demonstrate high precision and excellent recall, but the system is not yet optimized.
    Source
    Information processing and management. 31(1995) no.3, S.385-395
  2. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.09
    0.09264909 = product of:
      0.13897362 = sum of:
        0.09026093 = weight(_text_:management in 6438) [ClassicSimilarity], result of:
          0.09026093 = score(doc=6438,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.521365 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.0487127 = product of:
          0.0974254 = sum of:
            0.0974254 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.0974254 = score(doc=6438,freq=2.0), product of:
                0.17986396 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051362853 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    11. 8.2001 16:22:19
    Source
    Information processing and management. 36(2000) no.1, S.3-36
  3. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.09
    0.088832036 = product of:
      0.13324805 = sum of:
        0.045130465 = weight(_text_:management in 3368) [ClassicSimilarity], result of:
          0.045130465 = score(doc=3368,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 3368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3368)
        0.08811758 = sum of:
          0.039404877 = weight(_text_:system in 3368) [ClassicSimilarity], result of:
            0.039404877 = score(doc=3368,freq=2.0), product of:
              0.16177002 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.051362853 = queryNorm
              0.2435858 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
          0.0487127 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
            0.0487127 = score(doc=3368,freq=2.0), product of:
              0.17986396 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051362853 = queryNorm
              0.2708308 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
      0.6666667 = coord(2/3)
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance. They can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.555-572
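
    As a hedged illustration of the analytic idea (not Losee's exact model): under a single-term binary model where documents containing the query term are ranked above those that do not, the expected rank of a relevant document follows directly from database parameters, with no experiment needed. The parameter names and the mixing of expectations below are illustrative assumptions.

        def expected_rank_of_relevant(n_docs, n_relevant, p, q):
            """Rough analytic estimate of the expected rank of a randomly chosen
            relevant document when documents containing the query term are ranked
            first (random order within each block).
            p = P(term | relevant), q = P(term | non-relevant)."""
            n_with = n_relevant * p + (n_docs - n_relevant) * q  # expected block size
            n_without = n_docs - n_with
            rank_if_present = (n_with + 1) / 2
            rank_if_absent = n_with + (n_without + 1) / 2
            return p * rank_if_present + (1 - p) * rank_if_absent

        # e.g. a 10,000-document database with 50 relevant documents:
        print(expected_rank_of_relevant(10_000, 50, p=0.8, q=0.05))  # ~1269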
  4. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.08
    0.07718782 = product of:
      0.11578173 = sum of:
        0.032236047 = weight(_text_:management in 2026) [ClassicSimilarity], result of:
          0.032236047 = score(doc=2026,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.18620178 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.083545685 = sum of:
          0.048750892 = weight(_text_:system in 2026) [ClassicSimilarity], result of:
            0.048750892 = score(doc=2026,freq=6.0), product of:
              0.16177002 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.051362853 = queryNorm
              0.30135927 = fieldWeight in 2026, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.03479479 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.03479479 = score(doc=2026,freq=2.0), product of:
              0.17986396 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051362853 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it also enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact users and their tasks. By showing the richness of the results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  5. Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019) 0.07
    0.071223855 = product of:
      0.106835775 = sum of:
        0.032236047 = weight(_text_:management in 5287) [ClassicSimilarity], result of:
          0.032236047 = score(doc=5287,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.18620178 = fieldWeight in 5287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5287)
        0.07459973 = sum of:
          0.03980494 = weight(_text_:system in 5287) [ClassicSimilarity], result of:
            0.03980494 = score(doc=5287,freq=4.0), product of:
              0.16177002 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.051362853 = queryNorm
              0.24605882 = fieldWeight in 5287, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5287)
          0.03479479 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
            0.03479479 = score(doc=5287,freq=2.0), product of:
              0.17986396 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051362853 = queryNorm
              0.19345059 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5287)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: Effort, in addition to relevance, is a major factor in the satisfaction and utility of a document to the actual user. The purpose of this paper is to propose a method for generating relevance judgments that incorporate effort without involving human judges. The study then determines the variation in system rankings caused by low-effort relevance judgments when evaluating retrieval systems at different depths of evaluation.
    Design/methodology/approach: Effort-based relevance judgments are generated using a proposed boxplot approach for simple document features, HTML features and readability features. The boxplot approach is a simple yet repeatable way of classifying documents' effort while ensuring that outlier scores do not skew the grading of the entire set of documents.
    Findings: Evaluating retrieval systems with low-effort relevance judgments has a stronger influence at shallow depths of evaluation than at deeper ones. It is shown that the difference in system rankings is due to low-effort documents and not to the number of relevant documents.
    Originality/value: Hence, it is crucial to evaluate retrieval systems at shallow depth using low-effort relevance judgments.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 71(2019) no.1, S.2-17
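
    A minimal sketch of a boxplot-based effort grading of the kind the abstract describes; the concrete choices (Tukey fences, median split) are assumptions for illustration, not necessarily the authors' parameterization.

        import statistics

        def grade_effort(scores):
            # Quartiles and interquartile range of the per-document effort scores.
            q1, med, q3 = statistics.quantiles(scores, n=4)
            iqr = q3 - q1
            lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
            # Clamp outliers to the fences so extreme scores cannot skew the split.
            clamped = [min(max(s, lo_fence), hi_fence) for s in scores]
            # Median split into low-effort vs high-effort documents.
            return ["low" if s <= med else "high" for s in clamped]

        # e.g. readability-based effort scores for 6 documents, one extreme outlier:
        print(grade_effort([2.1, 2.4, 2.2, 5.0, 4.8, 40.0]))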
  6. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.06
    0.060385596 = product of:
      0.09057839 = sum of:
        0.051577676 = weight(_text_:management in 3845) [ClassicSimilarity], result of:
          0.051577676 = score(doc=3845,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 3845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=3845)
        0.039000716 = product of:
          0.07800143 = sum of:
            0.07800143 = weight(_text_:system in 3845) [ClassicSimilarity], result of:
              0.07800143 = score(doc=3845,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.48217484 = fieldWeight in 3845, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, the broader implications of the research for the design of information retrieval systems are discussed.
    Source
    Information processing and management. 27(1991) no.5, S.405-432
  7. Wolfram, D.; Volz, A.; Dimitroff, A.: ¬The effect of linkage structure on retrieval performance in a hypertext-based bibliographic retrieval system (1996) 0.06
    0.0563569 = product of:
      0.084535345 = sum of:
        0.045130465 = weight(_text_:management in 6622) [ClassicSimilarity], result of:
          0.045130465 = score(doc=6622,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 6622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6622)
        0.039404877 = product of:
          0.07880975 = sum of:
            0.07880975 = weight(_text_:system in 6622) [ClassicSimilarity], result of:
              0.07880975 = score(doc=6622,freq=8.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.4871716 = fieldWeight in 6622, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6622)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Investigates how linkage environments in a hypertext-based bibliographic retrieval system affect retrieval performance for novice and experienced searchers. 2 systems were tested: 1 with inter-record linkages to authors and descriptors, and 1 that also included title and abstract keywords. No significant differences in retrieval performance and system usage were found for most search tests. The enhanced system did provide better performance where title and abstract keywords provided the most direct access to relevant records. The findings have implications for the design of bibliographic information retrieval systems using hypertext linkages.
    Source
    Information processing and management. 32(1996) no.5, S.529-541
  8. Gluck, M.: Exploring the relationship between user satisfaction and relevance in information systems (1996) 0.05
    0.05337383 = product of:
      0.08006074 = sum of:
        0.045588657 = weight(_text_:management in 4082) [ClassicSimilarity], result of:
          0.045588657 = score(doc=4082,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2633291 = fieldWeight in 4082, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4082)
        0.034472086 = product of:
          0.06894417 = sum of:
            0.06894417 = weight(_text_:system in 4082) [ClassicSimilarity], result of:
              0.06894417 = score(doc=4082,freq=12.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.42618635 = fieldWeight in 4082, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4082)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Aims to better understand the relationship between relevance and user satisfaction, the 2 predominant aspects of user-based performance in information systems. Unconfounds relevance and user satisfaction assessments of system performance at the retrieved-item level. To minimize the idiosyncrasies of any one system, a generalized, naturalistic information system was employed in this study. Respondents completed sensemaking timeline questionnaires in which they described a recent need they had for geographic information. Retrieved documents from the generalized system consisted of the responses users obtained while resolving their information needs. Respondents directly provided process, product, cost-benefit, and overall satisfaction assessments of the generalized geographic systems. Relevance judgements of retrieved items were obtained through content analysis from the sensemaking questionnaires as a secondary observation technique. The content analysis provided relevance values on both 5-category and 2-category scales. Results indicate that relevance has strong relationships with process, product and overall user satisfaction measures, while relevance and cost-benefit satisfaction measures have no significant relationship. This analysis also indicates that neither relevance nor user satisfaction subsumes the other concept, and that understanding the proper units of analysis for these measures helps resolve the paradox of the management information system and information science literatures not informing each other concerning user-based information system performance measures.
    Source
    Information processing and management. 32(1996) no.1, S.89-104
  9. Serrano Cobos, J.; Quintero Orta, A.: Design, development and management of an information recovery system for an Internet Website : from documentary theory to practice (2003) 0.05
    0.0523929 = product of:
      0.07858935 = sum of:
        0.05470639 = weight(_text_:management in 2726) [ClassicSimilarity], result of:
          0.05470639 = score(doc=2726,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.31599492 = fieldWeight in 2726, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2726)
        0.02388296 = product of:
          0.04776592 = sum of:
            0.04776592 = weight(_text_:system in 2726) [ClassicSimilarity], result of:
              0.04776592 = score(doc=2726,freq=4.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.29527056 = fieldWeight in 2726, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2726)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A real case study is shown, tracing along a timeline the whole process of design, development and evaluation of a search engine used as a navigational help tool for end users and clients on a content website driven by e-commerce. The nature of the website is a community website, which will determine the core design of the information service. This study involves several steps, such as information recovery system analysis; comparative analysis of other commercial search engines; service design, functionalities and scope; software selection; design of the project; project management; future service administration; and conclusions.
  10. Díaz, A.; García, A.; Gervás, P.: User-centred versus system-centred evaluation of a personalization system (2008) 0.05
    0.049637042 = product of:
      0.07445556 = sum of:
        0.032236047 = weight(_text_:management in 2094) [ClassicSimilarity], result of:
          0.032236047 = score(doc=2094,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.18620178 = fieldWeight in 2094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2094)
        0.042219512 = product of:
          0.084439024 = sum of:
            0.084439024 = weight(_text_:system in 2094) [ClassicSimilarity], result of:
              0.084439024 = score(doc=2094,freq=18.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.52196956 = fieldWeight in 2094, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2094)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Some of the most popular measures used to evaluate information filtering systems are usually independent of the users because they are based on relevance judgments obtained from experts. On the other hand, user-centred evaluation allows showing the different impressions that users have formed of the running system. This work focuses on the problem of user-centred versus system-centred evaluation of a Web content personalization system where the personalization is based on a user model that stores long-term (sections, categories and keywords) and short-term interests (adapted from user-provided feedback). The user-centred evaluation is based on questionnaires filled in by the users before and after using the system, and the system-centred evaluation is based on the comparison between rankings of documents, obtained from the application of a multi-tier selection process, and binary relevance judgments collected previously from real users. The user-centred and system-centred evaluations performed with 106 users during 14 working days have provided valuable data concerning the behaviour of the users with respect to issues such as document relevance or the relative importance attributed to different ways of personalization. The results obtained show general satisfaction with both the personalization processes (selection, adaptation and presentation) and the system as a whole.
    Source
    Information processing and management. 44(2008) no.3, S.1293-1307
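
    A sketch of the system-centred half of such an evaluation: scoring a ranking against binary relevance judgments collected from users. Average precision is used here as a representative rank-based measure; the paper's exact measures may differ.

        def average_precision(ranked_ids, relevant_ids):
            # Binary-judgment, rank-based effectiveness of a single ranking.
            relevant = set(relevant_ids)
            hits, total = 0, 0.0
            for rank, doc in enumerate(ranked_ids, start=1):
                if doc in relevant:
                    hits += 1
                    total += hits / rank   # precision at each relevant document
            return total / len(relevant) if relevant else 0.0

        # e.g. a personalized ranking scored against 2 user-judged relevant documents:
        print(average_precision(["d3", "d1", "d7", "d2"], ["d1", "d2"]))  # 0.5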
  11. Sparck Jones, K.: Reflections on TREC : TREC-2 (1995) 0.05
    0.0493965 = product of:
      0.07409475 = sum of:
        0.051577676 = weight(_text_:management in 1916) [ClassicSimilarity], result of:
          0.051577676 = score(doc=1916,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 1916, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=1916)
        0.022517072 = product of:
          0.045034144 = sum of:
            0.045034144 = weight(_text_:system in 1916) [ClassicSimilarity], result of:
              0.045034144 = score(doc=1916,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.27838376 = fieldWeight in 1916, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1916)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Discusses the TREC programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research
    Source
    Information processing and management. 31(1995) no.3, S.291-314
  12. Blair, D.C.; Maron, M.E.: Full-text information retrieval : further analysis and clarification (1990) 0.05
    0.0493965 = product of:
      0.07409475 = sum of:
        0.051577676 = weight(_text_:management in 2046) [ClassicSimilarity], result of:
          0.051577676 = score(doc=2046,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 2046, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=2046)
        0.022517072 = product of:
          0.045034144 = sum of:
            0.045034144 = weight(_text_:system in 2046) [ClassicSimilarity], result of:
              0.045034144 = score(doc=2046,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.27838376 = fieldWeight in 2046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2046)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In 1985, an article by Blair and Maron described a detailed evaluation of the effectiveness of an operational full-text retrieval system used to support the defense of a large corporate lawsuit. The following year Salton published an article which called into question the conclusions of the 1985 study. The following article briefly reviews the initial study, replies to the objections raised by the second article, and clarifies several confusions and misunderstandings of the 1985 study.
    Source
    Information processing and management. 26(1990), S.437-447
  13. Harman, D.K.: ¬The first text retrieval conference : TREC-1, 1992 (1993) 0.05
    0.0493965 = product of:
      0.07409475 = sum of:
        0.051577676 = weight(_text_:management in 1317) [ClassicSimilarity], result of:
          0.051577676 = score(doc=1317,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 1317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=1317)
        0.022517072 = product of:
          0.045034144 = sum of:
            0.045034144 = weight(_text_:system in 1317) [ClassicSimilarity], result of:
              0.045034144 = score(doc=1317,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.27838376 = fieldWeight in 1317, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1317)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports on the 1st Text Retrieval Conference (TREC-1) held in Rockville, MD, 4-6 Nov. 1992. The TREC experiment is being run by the National Institute of Standards and Technology to allow information retrieval researchers to scale up from small collections of data to larger-sized experiments. Groups of researchers have been provided with text documents compressed on CD-ROM. They used experimental retrieval systems to search the text and evaluate the results.
    Source
    Information processing and management. 29(1993) no.4, S.411-414
  14. Salton, G.: ¬The state of retrieval system evaluation (1992) 0.05
    0.0493965 = product of:
      0.07409475 = sum of:
        0.051577676 = weight(_text_:management in 5250) [ClassicSimilarity], result of:
          0.051577676 = score(doc=5250,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 5250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=5250)
        0.022517072 = product of:
          0.045034144 = sum of:
            0.045034144 = weight(_text_:system in 5250) [ClassicSimilarity], result of:
              0.045034144 = score(doc=5250,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.27838376 = fieldWeight in 5250, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5250)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 28(1992) no.4, S.441-449
  15. Robertson, S.E.; Walker, S.; Hancock-Beaulieu, M.M.: Large test collection experiments of an operational, interactive system : OKAPI at TREC (1995) 0.05
    0.048662614 = product of:
      0.07299392 = sum of:
        0.045130465 = weight(_text_:management in 6964) [ClassicSimilarity], result of:
          0.045130465 = score(doc=6964,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 6964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6964)
        0.027863456 = product of:
          0.055726912 = sum of:
            0.055726912 = weight(_text_:system in 6964) [ClassicSimilarity], result of:
              0.055726912 = score(doc=6964,freq=4.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.34448233 = fieldWeight in 6964, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6964)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Okapi system has been used in a series of experiments on the TREC collections, investigating probabilistic methods, relevance feedback, query expansion, and interaction issues. Some new probabilistic models have been developed, resulting in simple weighting functions that take account of document length and within-document and within-query term frequency. All have been shown to be beneficial when based on large quantities of relevance data, as in the routing task. Interaction issues are much more difficult to evaluate in the TREC framework, and no benefits have yet been demonstrated from feedback based on small numbers of 'relevant' items identified by intermediary searchers.
    Source
    Information processing and management. 31(1995) no.3, S.345-360
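
    The "simple weighting functions" referred to here are the family that became known as Okapi BM25. A sketch with conventional default parameters (the k1, b, and k3 values below are common defaults, not necessarily those of the TREC runs):

        import math

        def bm25_weight(tf, qtf, df, n_docs, doc_len, avg_doc_len,
                        k1=1.2, b=0.75, k3=8.0):
            # Robertson/Sparck Jones idf component.
            idf = math.log((n_docs - df + 0.5) / (df + 0.5))
            # Document-length normalization.
            K = k1 * ((1 - b) + b * doc_len / avg_doc_len)
            # Saturating within-document and within-query term frequencies.
            doc_part = ((k1 + 1) * tf) / (K + tf)
            query_part = ((k3 + 1) * qtf) / (k3 + qtf)
            return idf * doc_part * query_part

        # e.g. a term occurring 3 times in a document 20% longer than average:
        print(bm25_weight(tf=3, qtf=1, df=5152, n_docs=44218,
                          doc_len=1200, avg_doc_len=1000))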
  16. Drabenstott, K.M.; Vizine-Goetz, D.: Using subject headings for online retrieval : theory, practice and potential (1994) 0.05
    0.04830591 = product of:
      0.07245886 = sum of:
        0.038683258 = weight(_text_:management in 386) [ClassicSimilarity], result of:
          0.038683258 = score(doc=386,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=386)
        0.03377561 = product of:
          0.06755122 = sum of:
            0.06755122 = weight(_text_:system in 386) [ClassicSimilarity], result of:
              0.06755122 = score(doc=386,freq=8.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.41757566 = fieldWeight in 386, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=386)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Using Subject Headings for Online Retrieval is an indispensable tool for online system designers who are developing new systems or refining existing ones. The book describes subject analysis and subject searching in online catalogs, including the limitations of retrieval, and demonstrates how such limitations can be overcome through system design and programming. The book describes the Library of Congress Subject Headings system and system characteristics, shows how information is stored in machine-readable files, and offers examples of and recommendations for successful methods. Tables are included to support these recommendations, and diagrams, graphs, and bar charts are used to provide results of data analyses.
    Footnote
    Rez. in: Information processing and management 31(1995) no.3, S.450-451 (R.R. Larson); Library resources and technical services 41(1997) no.1, S.60-67 (B.H. Weinberg)
  17. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.05
    0.046324544 = product of:
      0.06948681 = sum of:
        0.045130465 = weight(_text_:management in 7302) [ClassicSimilarity], result of:
          0.045130465 = score(doc=7302,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 7302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.02435635 = product of:
          0.0487127 = sum of:
            0.0487127 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
              0.0487127 = score(doc=7302,freq=2.0), product of:
                0.17986396 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051362853 = queryNorm
                0.2708308 = fieldWeight in 7302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7302)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that are compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question.
    Source
    Information processing and management. 30(1994) no.2, S.205-221
  18. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: ¬The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.05
    0.0452892 = product of:
      0.0679338 = sum of:
        0.038683258 = weight(_text_:management in 2021) [ClassicSimilarity], result of:
          0.038683258 = score(doc=2021,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
        0.029250536 = product of:
          0.058501072 = sum of:
            0.058501072 = weight(_text_:system in 2021) [ClassicSimilarity], result of:
              0.058501072 = score(doc=2021,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.36163113 = fieldWeight in 2021, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
    Source
    Information processing and management. 44(2008) no.1, S.4-21
  19. Angelini, M.; Fazzini, V.; Ferro, N.; Santucci, G.; Silvello, G.: CLAIRE: A combinatorial visual analytics system for information retrieval evaluation (2018) 0.04
    0.04447209 = product of:
      0.06670813 = sum of:
        0.032236047 = weight(_text_:management in 5049) [ClassicSimilarity], result of:
          0.032236047 = score(doc=5049,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.18620178 = fieldWeight in 5049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
        0.034472086 = product of:
          0.06894417 = sum of:
            0.06894417 = weight(_text_:system in 5049) [ClassicSimilarity], result of:
              0.06894417 = score(doc=5049,freq=12.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.42618635 = fieldWeight in 5049, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5049)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Information Retrieval (IR) develops complex systems, constituted of several components, which aim at returning and optimally ranking the most relevant documents in response to user queries. In this context, experimental evaluation plays a central role, since it allows for measuring the effectiveness of IR systems, increasing the understanding of their functioning, and better directing the efforts for improving them. Current evaluation methodologies are limited by two major factors: (i) IR systems are evaluated as "black boxes", since it is not possible to decompose the contributions of the different components, e.g., stop lists, stemmers, and IR models; (ii) given that it is not possible to predict the effectiveness of an IR system, both academia and industry need to explore huge numbers of systems, generated by large combinatorial compositions of their components, to understand how they perform and how these components interact together. We propose a Combinatorial visuaL Analytics system for Information Retrieval Evaluation (CLAIRE) which allows for exploring and making sense of the performances of a large number of IR systems, in order to quickly and intuitively grasp which system configurations are preferred, what the contributions of the different components are, and how these components interact together. The CLAIRE system is then validated against use cases based on several test collections, using a wide set of systems generated by a combinatorial composition of several off-the-shelf components, representing the most common denominator almost always present in English IR systems. In particular, we validate the findings enabled by CLAIRE with respect to consolidated deep statistical analyses, and we show that the CLAIRE system allows the generation of new insights which were not detectable with traditional approaches.
    Source
    Information processing and management. 54(2018) no.6, S.1077-1100
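
    The combinatorial space CLAIRE is built to explore can be enumerated mechanically; a sketch with an illustrative (hypothetical) component inventory, not the paper's actual sets:

        from itertools import product

        # Illustrative component inventory; the paper's actual sets are larger.
        stop_lists = ["none", "smart", "terrier"]
        stemmers = ["none", "porter", "krovetz"]
        ir_models = ["bm25", "tfidf", "lm_dirichlet"]

        # Every composition is one system configuration to run and evaluate.
        configurations = list(product(stop_lists, stemmers, ir_models))
        print(len(configurations))  # 27 systems from a 3 x 3 x 3 grid
        for config in configurations[:3]:
            print(config)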
  20. Spink, A.; Goodrum, A.: ¬A study of search intermediary working notes : implications for IR system design (1996) 0.04
    0.043221936 = product of:
      0.0648329 = sum of:
        0.045130465 = weight(_text_:management in 6981) [ClassicSimilarity], result of:
          0.045130465 = score(doc=6981,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 6981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
        0.019702438 = product of:
          0.039404877 = sum of:
            0.039404877 = weight(_text_:system in 6981) [ClassicSimilarity], result of:
              0.039404877 = score(doc=6981,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.2435858 = fieldWeight in 6981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6981)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 32(1996) no.6, S.681-695

Types

  • a 184
  • s 11
  • m 6
  • r 4
  • el 3
  • x 2
  • p 1