Search (216 results, page 1 of 11)

  • theme_ss:"Retrievalstudien"
  1. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.15
    0.14574726 = product of:
      0.24291208 = sum of:
        0.05648775 = weight(_text_:context in 5517) [ClassicSimilarity], result of:
          0.05648775 = score(doc=5517,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 5517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
        0.1538049 = weight(_text_:index in 5517) [ClassicSimilarity], result of:
          0.1538049 = score(doc=5517,freq=12.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.82782143 = fieldWeight in 5517, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
        0.03261943 = weight(_text_:system in 5517) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5517,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
      0.6 = coord(3/5)
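
    The score tree above is Lucene ClassicSimilarity (tf-idf) explain output: per matched term, weight = queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(termFreq), and idf = 1 + ln(maxDocs / (docFreq + 1)); the per-term weights are summed and scaled by coord (matched terms / query terms). A minimal Python sketch reproducing this entry's score from the figures shown (queryNorm and fieldNorm are taken from the tree rather than recomputed):

        import math

        def idf(doc_freq, max_docs):
            # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
            tf = math.sqrt(freq)                        # tf(freq) = sqrt(termFreq)
            query_weight = idf(doc_freq, max_docs) * query_norm
            field_weight = tf * idf(doc_freq, max_docs) * field_norm
            return query_weight * field_weight

        QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.04251826, 0.0546875, 44218
        terms = [("context", 2.0, 1904), ("index", 12.0, 1520), ("system", 2.0, 5152)]
        total = sum(term_weight(freq, df, MAX_DOCS, QUERY_NORM, FIELD_NORM)
                    for _term, freq, df in terms)
        print(total * 3 / 5)                            # coord(3/5) -> ~0.14574726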
    
    Abstract
    89 articles from a small Danish left-wing newspaper were indexed by PRECIS and KWIC. The articles cover a wide range of social science subjects. Controlled test searches in both indexes were carried out by 20 students of library science. The results obtained from this small-scale retrieval test were evaluated by a chi-square test. The PRECIS index led to more correct answers and fewer wrong answers than the KWIC index, i.e. it had both better recall and greater precision. Furthermore, the students were more confident in their judgement of the relevance of retrieved articles in the PRECIS index than in the KWIC index; and they generally favoured the PRECIS index in the subjective judgement they were asked to make.
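
    The chi-square evaluation referred to above amounts to testing a 2×2 table of correct vs. wrong answers per index; a sketch with made-up counts (illustration only, not Madelung's actual data):

        from scipy.stats import chi2_contingency

        # Hypothetical answer counts per index - not the study's data
        table = [[70, 30],   # PRECIS: correct, wrong
                 [50, 50]]   # KWIC:   correct, wrong
        chi2, p, dof, _expected = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")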
    Theme
    Preserved Context Index System (PRECIS)
  2. Prasher, R.G.: Evaluation of indexing system (1989) 0.06
    0.058527745 = product of:
      0.14631936 = sum of:
        0.07176066 = weight(_text_:index in 4998) [ClassicSimilarity], result of:
          0.07176066 = score(doc=4998,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 4998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=4998)
        0.0745587 = weight(_text_:system in 4998) [ClassicSimilarity], result of:
          0.0745587 = score(doc=4998,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5567675 = fieldWeight in 4998, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=4998)
      0.4 = coord(2/5)
    
    Abstract
    Describes an information system and its various components: index file construction, query formulation, and searching. Discusses indexing systems and brings out the need for their evaluation. Explains the concept of indexing system efficiency and discusses the factors which control it. Gives criteria for evaluation. Discusses recall and precision ratios, as well as noise ratio, novelty ratio, exhaustivity, and specificity, and the impact of each on the efficiency of an indexing system. Also mentions the various steps of an evaluation.
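
    The ratios discussed above can be stated compactly. A sketch of the usual definitions (recall = relevant retrieved / all relevant, precision = relevant retrieved / all retrieved, noise = 1 - precision, novelty = share of relevant retrieved items not already known to the user), which may differ in detail from Prasher's formulation:

        def ratios(retrieved, relevant, known_to_user):
            """Recall, precision, noise and novelty ratios for one search (all sets)."""
            hits = retrieved & relevant
            recall = len(hits) / len(relevant) if relevant else 0.0
            precision = len(hits) / len(retrieved) if retrieved else 0.0
            noise = 1.0 - precision                 # irrelevant share of the output
            novelty = len(hits - known_to_user) / len(hits) if hits else 0.0
            return recall, precision, noise, novelty

        # Toy search: 4 documents retrieved, 3 relevant in collection, 1 already known
        print(ratios({1, 2, 3, 4}, {2, 3, 5}, {2}))  # (0.67, 0.5, 0.5, 0.5)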
  3. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.05
    0.04696734 = product of:
      0.11741835 = sum of:
        0.07176066 = weight(_text_:index in 3649) [ClassicSimilarity], result of:
          0.07176066 = score(doc=3649,freq=8.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 3649, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
        0.04565769 = weight(_text_:system in 3649) [ClassicSimilarity], result of:
          0.04565769 = score(doc=3649,freq=12.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3409491 = fieldWeight in 3649, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
      0.4 = coord(2/5)
    
    Abstract
    F. W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline - his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented: indexing exhaustivity was increased and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
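
    The recall problem mentioned above (no complete relevance judgements exist for a large, dynamic database) was handled in the MEDLARS study by estimating recall against relevant documents identified independently of the system. A minimal sketch of that estimator, assuming the known items behave like an unbiased sample of all relevant documents:

        def estimated_recall(retrieved, known_relevant):
            """Share of independently identified relevant documents the search found."""
            if not known_relevant:
                raise ValueError("need at least one known relevant document")
            return len(retrieved & known_relevant) / len(known_relevant)

        # Hypothetical: the search returned docs {1, 4, 9, 23}; 5 relevant docs were
        # known beforehand (e.g. supplied by the requester), of which it found 2.
        print(estimated_recall({1, 4, 9, 23}, {4, 9, 17, 31, 42}))  # 0.4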
  4. Guglielmo, E.J.; Rowe, N.C.: Natural-language retrieval of images based on descriptive captions (1996) 0.05
    0.04653595 = product of:
      0.11633987 = sum of:
        0.0538205 = weight(_text_:index in 6624) [ClassicSimilarity], result of:
          0.0538205 = score(doc=6624,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.28967714 = fieldWeight in 6624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=6624)
        0.06251937 = weight(_text_:system in 6624) [ClassicSimilarity], result of:
          0.06251937 = score(doc=6624,freq=10.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.46686378 = fieldWeight in 6624, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=6624)
      0.4 = coord(2/5)
    
    Abstract
    Describes a prototype intelligent information retrieval system that uses natural-language understanding to efficiently locate captioned data. Multimedia data generally requires captions to explain its features and significance. Such descriptive captions often rely on long nominal compounds (strings of consecutive nouns) which create problems of ambiguous word sense. Presents a system in which captions and user queries are parsed and interpreted to produce a logical form, using a detailed theory of the meaning of nominal compounds. A fine-grain match can then compare the logical form of the query to the logical forms for each caption. To improve system efficiency, the system performs a coarse-grain match with index files, using nouns and verbs extracted from the query. Experiments with randomly selected queries and captions from an existing image library show an increase of 30% in precision and 50% in recall over the keyphrase approach currently used. Processing times have a median of 7 seconds as compared to 8 minutes for the existing system.
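
    The two-stage strategy described above (a cheap coarse-grain filter over index files, then an expensive fine-grain match on the survivors) can be sketched as follows; the plain term overlap used for the fine stage is a stand-in for the paper's logical-form comparison:

        from collections import defaultdict

        captions = {  # toy caption store
            1: "aircraft landing on carrier deck",
            2: "missile launch from submarine",
        }
        index = defaultdict(set)  # inverted index over caption words
        for doc_id, text in captions.items():
            for word in text.split():
                index[word].add(doc_id)

        def coarse_match(query_terms):
            # Candidates sharing at least one noun/verb with the query
            return set().union(*(index.get(t, set()) for t in query_terms))

        def fine_match(query_terms, doc_id):
            # Stand-in for the logical-form comparison: plain term overlap
            words = set(captions[doc_id].split())
            return len(words & set(query_terms)) / len(set(query_terms))

        query = ["aircraft", "landing"]
        ranked = sorted(coarse_match(query), key=lambda d: -fine_match(query, d))
        print(ranked)  # [1]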
  5. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.05
    0.04577866 = product of:
      0.11444665 = sum of:
        0.05272096 = weight(_text_:system in 5002) [ClassicSimilarity], result of:
          0.05272096 = score(doc=5002,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3936941 = fieldWeight in 5002, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.061725687 = product of:
          0.09258853 = sum of:
            0.04650343 = weight(_text_:29 in 5002) [ClassicSimilarity], result of:
              0.04650343 = score(doc=5002,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
            0.046085097 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.046085097 = score(doc=5002,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.6666667 = coord(2/3)
      0.4 = coord(2/5)
    
    Abstract
    Many retrieval experiments are intended to discover ways of improving performance, taking the results obtained with some particular technique as a baseline. The fact that substantial alterations to a system often have little or no effect on particular collections is puzzling. This may be due to the initially poor separation of relevant and non-relevant documents. The paper presents a procedure for characterizing this separation for a collection, which can be used to show whether proposed modifications of the base system are likely to be useful.
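
    A simple statistic in the spirit of this separation test (not van Rijsbergen's exact procedure) is the probability that a randomly chosen relevant document outscores a randomly chosen non-relevant one; 0.5 means no separation at all:

        def separation(relevant_scores, nonrelevant_scores):
            """P(relevant scores above non-relevant), counting ties as half."""
            pairs = [(r, n) for r in relevant_scores for n in nonrelevant_scores]
            wins = sum(r > n for r, n in pairs) + 0.5 * sum(r == n for r, n in pairs)
            return wins / len(pairs)

        print(separation([0.9, 0.7, 0.4], [0.5, 0.3, 0.2]))  # ~0.89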
    Date
    19. 3.1996 11:22:12
    Source
    Journal of documentation. 29(1973) no.3, S.251-257
  6. Park, T.K.: ¬The nature of relevance in information retrieval : an empirical study (1993) 0.04
    0.044728827 = product of:
      0.11182207 = sum of:
        0.08386256 = weight(_text_:context in 5336) [ClassicSimilarity], result of:
          0.08386256 = score(doc=5336,freq=6.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.475888 = fieldWeight in 5336, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=5336)
        0.027959513 = weight(_text_:system in 5336) [ClassicSimilarity], result of:
          0.027959513 = score(doc=5336,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 5336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=5336)
      0.4 = coord(2/5)
    
    Abstract
    Experimental research in information retrieval (IR) depends on the idea of relevance. Because of its key role in IR, recent questions about relevance have raised issues of methodological concern and have shaken the philosophical foundations of IR theory development. Despite an existing set of theoretical definitions of this concept, our understanding of relevance from users' perspectives is still limited. Using naturalistic inquiry methodology, this article reports an empirical study of user-based relevance interpretations. A model is presented that reflects the nature of the thought process of users who are evaluating bibliographic citations produced by a document retrieval system. Three major categories of variables affecting relevance assessments - internal context, external context, and problem context - are identified and described. Users' relevance assessments involve multiple layers of interpretations that are derived from individuals' experiences, perceptions, and private knowledge related to the particular information problems at hand.
  7. Dunlop, M.D.; Johnson, C.W.; Reid, J.: Exploring the layers of information retrieval evaluation (1998) 0.04
    0.041047435 = product of:
      0.10261859 = sum of:
        0.05648775 = weight(_text_:context in 3762) [ClassicSimilarity], result of:
          0.05648775 = score(doc=3762,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 3762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3762)
        0.04613084 = weight(_text_:system in 3762) [ClassicSimilarity], result of:
          0.04613084 = score(doc=3762,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.34448233 = fieldWeight in 3762, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3762)
      0.4 = coord(2/5)
    
    Abstract
    Presents current work on modelling interactive information retrieval systems and users' interactions with them. Analyzes the papers in this special issue in the context of evaluation in information retrieval (IR) by examining the different layers at which IR use could be evaluated. IR poses the double evaluation problem of evaluating both the underlying system effectiveness and the overall ability of the system to aid users. The papers look at different issues in combining human-computer interaction (HCI) research with IR research and provide insights into the problem of evaluating the information seeking process
  8. Angelini, M.; Fazzini, V.; Ferro, N.; Santucci, G.; Silvello, G.: CLAIRE: A combinatorial visual analytics system for information retrieval evaluation (2018) 0.04
    0.0389682 = product of:
      0.097420506 = sum of:
        0.040348392 = weight(_text_:context in 5049) [ClassicSimilarity], result of:
          0.040348392 = score(doc=5049,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 5049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
        0.057072114 = weight(_text_:system in 5049) [ClassicSimilarity], result of:
          0.057072114 = score(doc=5049,freq=12.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.42618635 = fieldWeight in 5049, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.4 = coord(2/5)
    
    Abstract
    Information Retrieval (IR) develops complex systems, composed of several components, which aim at returning and optimally ranking the most relevant documents in response to user queries. In this context, experimental evaluation plays a central role, since it allows for measuring IR system effectiveness, increasing the understanding of their functioning, and better directing the efforts for improving them. Current evaluation methodologies are limited by two major factors: (i) IR systems are evaluated as "black boxes", since it is not possible to decompose the contributions of the different components, e.g., stop lists, stemmers, and IR models; (ii) given that it is not possible to predict the effectiveness of an IR system, both academia and industry need to explore huge numbers of systems, arising from large combinatorial compositions of their components, to understand how they perform and how these components interact together. We propose a Combinatorial visuaL Analytics system for Information Retrieval Evaluation (CLAIRE) which allows for exploring and making sense of the performances of a large number of IR systems, in order to quickly and intuitively grasp which system configurations are preferred, what are the contributions of the different components, and how these components interact together. The CLAIRE system is then validated against use cases based on several test collections using a wide set of systems, generated by a combinatorial composition of several off-the-shelf components, representing the most common denominator almost always present in English IR systems. In particular, we validate the findings enabled by CLAIRE with respect to consolidated deep statistical analyses and we show that the CLAIRE system allows the generation of new insights, which were not detectable with traditional approaches.
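
    The combinatorial space CLAIRE visualizes can be generated mechanically; a sketch over hypothetical component choices (the names below are illustrative, not the components used in the paper):

        from itertools import product

        stop_lists = ["none", "smart"]
        stemmers = ["none", "porter", "krovetz"]
        models = ["tfidf", "bm25", "lm_dirichlet"]

        configurations = list(product(stop_lists, stemmers, models))
        print(len(configurations))  # 18 system configurations to run and compare
        for stop, stem, model in configurations[:3]:
            print(f"run: stop_list={stop} stemmer={stem} model={model}")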
  9. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: ¬The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.04
    0.038738146 = product of:
      0.096845366 = sum of:
        0.04841807 = weight(_text_:context in 2021) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2021,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
        0.048427295 = weight(_text_:system in 2021) [ClassicSimilarity], result of:
          0.048427295 = score(doc=2021,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.36163113 = fieldWeight in 2021, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.4 = coord(2/5)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  10. Shakir, H.S.; Nagao, M.: Context-sensitive processing of semantic queries in an image database system (1996) 0.04
    0.0385732 = product of:
      0.096433006 = sum of:
        0.068473496 = weight(_text_:context in 6626) [ClassicSimilarity], result of:
          0.068473496 = score(doc=6626,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.38856095 = fieldWeight in 6626, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=6626)
        0.027959513 = weight(_text_:system in 6626) [ClassicSimilarity], result of:
          0.027959513 = score(doc=6626,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 6626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=6626)
      0.4 = coord(2/5)
    
    Abstract
    In an image database environment, an image can be retrieved using common names of entities that appear in it. Shows how an image is abstracted into a hierarchy of entity names and features and how relations are established between entities visible in the image. Semantic queries are also hierarchical. The system's core is a fuzzy matching technique that compares semantic queries to image abstractions by assessing the similarity of contexts between the query and the candidate image. An important objective of this matching technique is to distinguish between abstractions of different images that have the same labels but differ in context from each other. Each image is tagged with a matching degree even when it does not provide an exact match of the query. Experiments have been conducted to evaluate the strategy.
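
    A toy version of the context comparison described above (two images carry the same entity label and are told apart by their surrounding labels); the Jaccard overlap below is a simplification for illustration, not the paper's fuzzy measure:

        def context_similarity(query_context, image_context):
            """Jaccard overlap of context labels as a crude matching degree."""
            if not query_context or not image_context:
                return 0.0
            union = query_context | image_context
            return len(query_context & image_context) / len(union)

        images = {  # two images labelled "bridge", differing only in context
            "img1": {"bridge", "river", "boat"},
            "img2": {"bridge", "highway", "cars"},
        }
        query = {"bridge", "river"}
        for name, ctx in images.items():
            print(name, round(context_similarity(query, ctx), 2))  # img1 wins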
  11. Behnert, C.; Lewandowski, D.: ¬A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.04
    0.037274025 = product of:
      0.09318506 = sum of:
        0.06988547 = weight(_text_:context in 3700) [ClassicSimilarity], result of:
          0.06988547 = score(doc=3700,freq=6.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.39657336 = fieldWeight in 3700, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
        0.023299592 = weight(_text_:system in 3700) [ClassicSimilarity], result of:
          0.023299592 = score(doc=3700,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 3700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials. Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
  12. Buckley, C.; Voorhees, E.M.: Retrieval system evaluation (2005) 0.04
    0.036946345 = product of:
      0.09236586 = sum of:
        0.06523886 = weight(_text_:system in 648) [ClassicSimilarity], result of:
          0.06523886 = score(doc=648,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4871716 = fieldWeight in 648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.109375 = fieldNorm(doc=648)
        0.027127001 = product of:
          0.081381 = sum of:
            0.081381 = weight(_text_:29 in 648) [ClassicSimilarity], result of:
              0.081381 = score(doc=648,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.5441145 = fieldWeight in 648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=648)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    29. 3.1996 18:16:49
  13. Aldous, K.J.: ¬A system for the automatic retrieval of information from a specialist database (1996) 0.04
    0.035183515 = product of:
      0.08795879 = sum of:
        0.04841807 = weight(_text_:context in 4078) [ClassicSimilarity], result of:
          0.04841807 = score(doc=4078,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 4078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=4078)
        0.03954072 = weight(_text_:system in 4078) [ClassicSimilarity], result of:
          0.03954072 = score(doc=4078,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 4078, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4078)
      0.4 = coord(2/5)
    
    Abstract
    Accessing useful information from a complex database requires knowledge of the structure of the database and an understanding of the methods of information retrieval. A means of overcoming this knowledge barrier to the use of narrow domain databases is proposed in which the user is required to enter only a series of terms which identify the required material. Describes a method which classifies terms according to their meaning in the context of the database and which uses this classification to access and execute modules of code stored in the database to effect retrieval. Presents an implementation of the method using a database of technical information on the nature and use of fungicides. Initial results of trials with potential users indicate that the system can produce relevant responses to queries expressed in this style. Since the code modules are part of the database, extensions may be easily implemented to handle most queries which users are likely to pose.
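
    The mechanism described above (terms are classified, and the classification selects stored code to execute) is essentially a dispatch table; a minimal sketch with invented classes and handlers standing in for the code modules held in the database:

        def by_pathogen(term):
            return f"fungicides effective against pathogen '{term}'"

        def by_crop(term):
            return f"fungicides approved for crop '{term}'"

        HANDLERS = {"pathogen": by_pathogen, "crop": by_crop}
        TERM_CLASSES = {"mildew": "pathogen", "wheat": "crop"}  # toy classification

        def retrieve(terms):
            # Classify each term, then run the module its class points to
            return [HANDLERS[TERM_CLASSES[t]](t) for t in terms if t in TERM_CLASSES]

        print(retrieve(["mildew", "wheat"]))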
  14. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.03
    0.03196765 = product of:
      0.07991912 = sum of:
        0.064557426 = weight(_text_:context in 744) [ClassicSimilarity], result of:
          0.064557426 = score(doc=744,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.36633876 = fieldWeight in 744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0625 = fieldNorm(doc=744)
        0.015361699 = product of:
          0.046085097 = sum of:
            0.046085097 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.046085097 = score(doc=744,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for searching large full-text collections. The conference deals with evaluation and comparison techniques developed since 1992 by participants from the research and industrial fields. The work of the conference is intended for designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  15. López-Ostenero, F.; Peinado, V.; Gonzalo, J.; Verdejo, F.: Interactive question answering : Is Cross-Language harder than monolingual searching? (2008) 0.03
    0.030551035 = product of:
      0.076377586 = sum of:
        0.04841807 = weight(_text_:context in 2023) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2023,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2023, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2023)
        0.027959513 = weight(_text_:system in 2023) [ClassicSimilarity], result of:
          0.027959513 = score(doc=2023,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 2023, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2023)
      0.4 = coord(2/5)
    
    Abstract
    Is Cross-Language answer finding harder than Monolingual answer finding for users? In this paper we provide initial quantitative and qualitative evidence to answer this question. In our study, which involves 16 users searching questions under four different system conditions, we find that interactive cross-language answer finding is not substantially harder (in terms of accuracy) than its monolingual counterpart, using general purpose Machine Translation systems and standard Information Retrieval machinery, although it takes more time. We have also seen that users need more context to provide accurate answers (full documents) than what is usually considered by systems (paragraphs or passages). Finally, we also discuss the limitations of standard evaluation methodologies for interactive Information Retrieval experiments in the case of cross-language question answering.
  16. Pirkola, A.; Järvelin, K.: Employing the resolution power of search keys (2001) 0.03
    0.028020501 = product of:
      0.07005125 = sum of:
        0.05648775 = weight(_text_:context in 5907) [ClassicSimilarity], result of:
          0.05648775 = score(doc=5907,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 5907, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5907)
        0.013563501 = product of:
          0.0406905 = sum of:
            0.0406905 = weight(_text_:29 in 5907) [ClassicSimilarity], result of:
              0.0406905 = score(doc=5907,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.27205724 = fieldWeight in 5907, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5907)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Search key resolution power is analyzed in the context of a request, i.e., among the set of search keys for the request. Methods of characterizing the resolution power of keys automatically are studied, and the effects search keys of varying resolution power have on retrieval effectiveness are analyzed. It is shown that it often is possible to identify the best key of a query while the discrimination between the remaining keys presents problems. It is also shown that query performance is improved by suitably using the best key in a structured query. The tests were run with InQuery in a subcollection of the TREC collection, which contained some 515,000 documents
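
    One crude automatic proxy for a key's resolution power within a request is its collection-level discrimination relative to the query's other keys, e.g. via idf (an illustration of the idea, not the authors' measure; the document frequencies below are hypothetical):

        import math

        N_DOCS = 515_000  # roughly the subcollection size cited above
        doc_freqs = {"information": 210_000, "retrieval": 42_000,
                     "tuberculosis": 900}   # hypothetical document frequencies

        power = {key: math.log(N_DOCS / df) for key, df in doc_freqs.items()}
        best = max(power, key=power.get)
        print(best, round(power[best], 2))  # the rare key resolves the request best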
    Date
    29. 9.2001 14:01:42
  17. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.03
    0.027976 = product of:
      0.06994 = sum of:
        0.056498513 = weight(_text_:system in 2718) [ClassicSimilarity], result of:
          0.056498513 = score(doc=2718,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.42190298 = fieldWeight in 2718, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2718)
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
              0.04032446 = score(doc=2718,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 2718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2718)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual inquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the inquiries produced no noise, but the mean percentage noise figure was approximately 33 per cent, based on an average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
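
    For scale, the figures above imply roughly six irrelevant documents per search (a quick arithmetic check only):

        mean_retrieved = 18
        noise_rate = 0.33
        print(round(mean_retrieved * noise_rate, 1))  # ~5.9 irrelevant docs per search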
    Source
    Journal of documentation. 22(1966), S.203-209
  18. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.03
    0.027233064 = product of:
      0.06808266 = sum of:
        0.05272096 = weight(_text_:system in 3087) [ClassicSimilarity], result of:
          0.05272096 = score(doc=3087,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3936941 = fieldWeight in 3087, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=3087)
        0.015361699 = product of:
          0.046085097 = sum of:
            0.046085097 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.046085097 = score(doc=3087,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  19. Blair, D.C.; Maron, M.E.: ¬An evaluation of retrieval effectiveness for a full-text document-retrieval system (1985) 0.03
    0.026390245 = product of:
      0.065975614 = sum of:
        0.046599183 = weight(_text_:system in 1345) [ClassicSimilarity], result of:
          0.046599183 = score(doc=1345,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3479797 = fieldWeight in 1345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=1345)
        0.01937643 = product of:
          0.05812929 = sum of:
            0.05812929 = weight(_text_:29 in 1345) [ClassicSimilarity], result of:
              0.05812929 = score(doc=1345,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38865322 = fieldWeight in 1345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1345)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Footnote
    See also: Salton, G.: Another look ... Comm. ACM 29(1986) S.648-656; Blair, D.C.: Full text retrieval ... Int. Class. 13(1986) S.18-23; Blair, D.C., M.E. Maron: Full-text information retrieval ... Inf. proc. man. 26(1990) S.437-447.
  20. Aitchison, T.M.: Comparative evaluation of index languages : Part I, Design. Part II, Results (1969) 0.03
    0.025116233 = product of:
      0.12558116 = sum of:
        0.12558116 = weight(_text_:index in 561) [ClassicSimilarity], result of:
          0.12558116 = score(doc=561,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.67591333 = fieldWeight in 561, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.109375 = fieldNorm(doc=561)
      0.2 = coord(1/5)
    

Types

  • a 196
  • s 11
  • m 6
  • r 5
  • el 4
  • p 1
  • x 1