Search (46 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.08
    0.08120373 = product of:
      0.16240746 = sum of:
        0.120043606 = weight(_text_:assess in 2026) [ClassicSimilarity], result of:
          0.120043606 = score(doc=2026,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.042363856 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
          0.042363856 = score(doc=2026,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.19345059 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
      0.5 = coord(2/4)
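
    The breakdown above is standard Lucene "explain" output under the classic TF-IDF similarity: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), the contributions are summed, and the sum is scaled by the coordination factor (matching query terms / total query terms). A minimal sketch that approximately reproduces the figures shown for this entry, assuming Lucene's ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

    ```python
    import math

    def tf(freq):
        # ClassicSimilarity term-frequency factor: square root of the raw frequency
        return math.sqrt(freq)

    def idf(doc_freq, max_docs):
        # ClassicSimilarity inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_contribution(freq, doc_freq, max_docs, query_norm, field_norm):
        query_weight = idf(doc_freq, max_docs) * query_norm               # idf * queryNorm
        field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm    # tf * idf * fieldNorm
        return query_weight * field_weight

    # Figures displayed for entry 1 (doc 2026): terms "assess" and "22"
    query_norm = 0.062536046
    w_assess = term_contribution(2.0, 330, 44218, query_norm, 0.0390625)    # ~0.1200
    w_22     = term_contribution(2.0, 3622, 44218, query_norm, 0.0390625)   # ~0.0424
    score    = 0.5 * (w_assess + w_22)                                      # coord(2/4) * sum ~0.0812
    print(w_assess, w_22, score)
    ```

    Up to floating-point rounding of the displayed idf values, this matches the 0.1200..., 0.0424... and 0.0812 figures shown above; the same decomposition applies to every entry in this list.
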
    
    Abstract
    This paper discusses the role of user-centred evaluation as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system but also enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluation as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  2. Robertson, S.: On the history of evaluation in IR (2009) 0.05
    0.04801744 = product of:
      0.19206975 = sum of:
        0.19206975 = weight(_text_:assess in 3653) [ClassicSimilarity], result of:
          0.19206975 = score(doc=3653,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5210289 = fieldWeight in 3653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0625 = fieldNorm(doc=3653)
      0.25 = coord(1/4)
    
    Abstract
    This paper is a personal take on the history of evaluation experiments in information retrieval. It describes some of the early experiments that were formative in our understanding, and goes on to discuss the current dominance of TREC (the Text REtrieval Conference) and to assess its impact.
  3. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.04
    0.04244182 = product of:
      0.16976728 = sum of:
        0.16976728 = weight(_text_:assess in 4393) [ClassicSimilarity], result of:
          0.16976728 = score(doc=4393,freq=4.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.4605288 = fieldWeight in 4393, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well, compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, compared with pre-assessed results, is inferior to that of assessments made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both as measured by access time and as self-reported by subjects. However, effects on results as measured by traditional relevance ranking were not detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish texts and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  4. Shafique, M.; Chaudhry, A.S.: Intelligent agent-based online information retrieval (1995) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 3851) [ClassicSimilarity], result of:
          0.14405231 = score(doc=3851,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 3851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3851)
      0.25 = coord(1/4)
    
    Abstract
    Describes an intelligent agent-based information retrieval model. The relevance matrix used by the intelligent agent consists of rows and columns: rows represent the documents, columns represent keywords, and each entry holds the predetermined weight of a keyword in a document. The search/query vector is constructed by the intelligent agent through explicit interaction with the user, using an interactive query refinement technique. By manipulating the relevance matrix against the search vector, the agent filters the document representations and retrieves the most relevant documents, thereby improving retrieval performance. Work is in progress on an experiment to compare the retrieval results from a conventional retrieval model and an intelligent agent-based retrieval model. A test document collection on artificial intelligence has been selected as a sample, and retrieval tests are being carried out with a selected group of researchers using the two retrieval systems. Results will be compared to assess retrieval performance using precision and recall metrics.
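
    A toy reading of the matrix-based matching described in the abstract above. The keyword weights, the inner-product scoring function, and all names are illustrative assumptions; the abstract does not specify the exact manipulation used:

    ```python
    # Hypothetical relevance matrix: rows = documents, columns = keyword weights
    keywords = ["agent", "retrieval", "interface"]
    relevance_matrix = [
        [0.8, 0.6, 0.0],   # doc 0
        [0.0, 0.9, 0.3],   # doc 1
        [0.5, 0.0, 0.7],   # doc 2
        [0.2, 0.4, 0.4],   # doc 3
    ]

    # Query vector; in the model it is built interactively with the user via query refinement
    query = [1.0, 0.5, 0.0]

    def score(doc_weights, query_vector):
        # One plausible "manipulation": inner product of document and query keyword weights
        return sum(w * q for w, q in zip(doc_weights, query_vector))

    ranking = sorted(range(len(relevance_matrix)),
                     key=lambda d: score(relevance_matrix[d], query), reverse=True)
    for d in ranking:
        print(f"doc {d}: {score(relevance_matrix[d], query):.2f}")
    ```
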
  5. Keyes, J.G.: Using conceptual categories of questions to measure differences in retrieval performance (1996) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 7440) [ClassicSimilarity], result of:
          0.14405231 = score(doc=7440,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 7440, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=7440)
      0.25 = coord(1/4)
    
    Abstract
    The form of a question denotes the relationship between the current state of knowledge of the questioner and the propositional content of the question. To assess whether these semantic differences have implications for information retrieval, the study uses the CF database, a 1,239-document test collection containing titles and abstracts of documents pertaining to cystic fibrosis. The database has an accompanying list of 100 questions, which were divided into five conceptual categories based on their semantic representation. Two retrieval methods were used to investigate potential differences in outcomes across conceptual categories: the cosine measurement and the similarity measurement. The ranked results produced by the different algorithms will vary for individual conceptual categories as well as for overall performance.
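
    A sketch of how per-category performance differences of the kind described above can be measured. The cosine ranking and the precision-at-k measure are assumptions standing in for the study's two retrieval methods, and the data structures are hypothetical placeholders:

    ```python
    import math
    from collections import defaultdict

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def precision_at_k(ranked_ids, relevant_ids, k=10):
        return sum(1 for d in ranked_ids[:k] if d in relevant_ids) / k

    def per_category_performance(queries, docs, qrels, k=10):
        # queries: qid -> (category, term vector); docs: doc_id -> term vector;
        # qrels: qid -> set of relevant doc_ids
        scores = defaultdict(list)
        for qid, (category, qvec) in queries.items():
            ranked = sorted(docs, key=lambda d: cosine(qvec, docs[d]), reverse=True)
            scores[category].append(precision_at_k(ranked, qrels[qid], k))
        return {c: sum(v) / len(v) for c, v in scores.items()}
    ```

    Swapping in a second similarity function and comparing the per-category averages gives the kind of contrast between conceptual categories that the study reports.
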
  6. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 2021) [ClassicSimilarity], result of:
          0.14405231 = score(doc=2021,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.25 = coord(1/4)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  7. Sun, Y.; Kantor, P.B.: Cross-evaluation : a new model for information system evaluation (2006) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 5048) [ClassicSimilarity], result of:
          0.120043606 = score(doc=5048,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 5048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5048)
      0.25 = coord(1/4)
    
    Abstract
    In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, an approach we call Cross-Evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the "three realities" paradigm, which calls for "real tasks," "real users," and "real systems."
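
    One way to read the five-effect linear model sketched in the abstract is as an additive decomposition of a task-completion measure; the notation below is an assumed paraphrase, not the authors' own:

    ```latex
    % y: task-completion measure when user u completes task t on system s, scored by judge j
    y_{u,s,t,j} = \mu + \alpha_u + \beta_s + \gamma_t + \delta_j
                  + \lambda\,[u = j] + \varepsilon_{u,s,t,j}
    ```

    Here the indicator [u = j] carries the self-judgment bias, and an analysis-of-variance fit isolates each of the five effects, as the abstract describes.
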
  8. Ruthven, I.; Baillie, M.; Elsweiler, D.: ¬The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 835) [ClassicSimilarity], result of:
          0.120043606 = score(doc=835,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=835)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses to the questionnaire questions were correlated with the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and on the balance between how many documents were marked as marginally or highly relevant. These factors are also shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research was conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but concern mainly the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation, to create tasks that are interesting and relevant to experimental subjects. Originality/value - Examining different search variables within one study to compare the relative effects of these variables on the search outcomes.
  9. Parapar, J.; Losada, D.E.; Presedo-Quindimil, M.A.; Barreiro, A.: Using score distributions to compare statistical significance tests for information retrieval evaluation (2020) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 5506) [ClassicSimilarity], result of:
          0.120043606 = score(doc=5506,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 5506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5506)
      0.25 = coord(1/4)
    
    Abstract
    Statistical significance tests can provide evidence that the observed difference in performance between two methods is not due to chance. In information retrieval (IR), some studies have examined the validity and suitability of such tests for comparing search systems. We argue here that current methods for assessing the reliability of statistical tests suffer from some methodological weaknesses, and we propose a novel way to study significance tests for retrieval evaluation. Using score distributions, we model the output of multiple search systems, produce simulated search results from such models, and compare them using various significance tests. A key strength of this approach is that we assess statistical tests under perfect knowledge about whether the null hypothesis is true or false. This new method for studying the power of significance tests in IR evaluation is formal and innovative. Following this type of analysis, we found that both the sign test and the Wilcoxon signed-rank test have more power than the permutation test and the t-test. The sign test and the Wilcoxon signed-rank test also have good behavior in terms of type I errors. The bootstrap test shows few type I errors, but it has less power than the other methods tested.
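
    A compact sketch of the simulation idea: draw per-topic score differences between two simulated systems with a known true difference, run a paired test many times, and count rejections (the type I error rate when the true difference is zero, the power otherwise). The sign test and sign-flip permutation test below are textbook implementations under assumed Gaussian score differences, not the authors' code or score-distribution models:

    ```python
    import random
    from math import comb

    def sign_test_p(diffs):
        # Two-sided sign test via the binomial distribution (zero differences dropped)
        d = [x for x in diffs if x != 0]
        n = len(d)
        pos = sum(1 for x in d if x > 0)
        tail = sum(comb(n, i) for i in range(min(pos, n - pos) + 1)) / 2 ** n
        return min(1.0, 2 * tail)

    def permutation_test_p(diffs, n_perm=2000, seed=0):
        # Two-sided paired (sign-flip) permutation test on the mean difference
        rng = random.Random(seed)
        obs = abs(sum(diffs) / len(diffs))
        hits = sum(
            1 for _ in range(n_perm)
            if abs(sum(d if rng.random() < 0.5 else -d for d in diffs) / len(diffs)) >= obs
        )
        return (hits + 1) / (n_perm + 1)

    def rejection_rate(delta, test, n_topics=50, n_runs=200, alpha=0.05, seed=42):
        # Fraction of simulated comparisons in which the test rejects the null hypothesis.
        # delta = 0.0 -> null is true, so this estimates type I error; delta > 0 -> power.
        rng = random.Random(seed)
        rejections = 0
        for _ in range(n_runs):
            diffs = [rng.gauss(delta, 0.1) for _ in range(n_topics)]
            if test(diffs) < alpha:
                rejections += 1
        return rejections / n_runs

    print("sign test, type I error:", rejection_rate(0.0, sign_test_p))
    print("sign test, power       :", rejection_rate(0.03, sign_test_p))
    ```
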
  10. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten [A retrieval test with automatically indexed documents] (1984) 0.03
    0.029654698 = product of:
      0.118618794 = sum of:
        0.118618794 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
          0.118618794 = score(doc=262,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.5416616 = fieldWeight in 262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=262)
      0.25 = coord(1/4)
    
    Date
    20.10.2000 12:22:23
  11. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.03
    0.029654698 = product of:
      0.118618794 = sum of:
        0.118618794 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
          0.118618794 = score(doc=6418,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.5416616 = fieldWeight in 6418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=6418)
      0.25 = coord(1/4)
    
    Source
    Online. 22(1998) no.6, S.57-58
  12. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.03
    0.029654698 = product of:
      0.118618794 = sum of:
        0.118618794 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
          0.118618794 = score(doc=6438,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.5416616 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
      0.25 = coord(1/4)
    
    Date
    11. 8.2001 16:22:19
  13. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.03
    0.029654698 = product of:
      0.118618794 = sum of:
        0.118618794 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
          0.118618794 = score(doc=5089,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.5416616 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=5089)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:43:54
  14. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.02
    0.02400872 = product of:
      0.09603488 = sum of:
        0.09603488 = weight(_text_:assess in 3649) [ClassicSimilarity], result of:
          0.09603488 = score(doc=3649,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.26051444 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
      0.25 = coord(1/4)
    
    Abstract
    F.W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline: his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented: indexing exhaustivity was increased, and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
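
    For reference, the recall and precision failures discussed above reduce to simple proportions over a single search request; the counts below are made-up illustrations, not MEDLARS figures:

    ```python
    # Hypothetical counts for one search request (illustrative, not from the MEDLARS study)
    relevant_in_db = 40      # relevant documents in the database for this request
    retrieved = 25           # documents retrieved by the search
    relevant_retrieved = 15  # documents that are both retrieved and relevant

    recall = relevant_retrieved / relevant_in_db            # 0.375
    precision = relevant_retrieved / retrieved              # 0.60
    recall_failures = relevant_in_db - relevant_retrieved   # 25 relevant documents missed
    precision_failures = retrieved - relevant_retrieved     # 10 irrelevant documents retrieved
    print(recall, precision, recall_failures, precision_failures)
    ```

    Analysing which indexing or index-language factors account for each failure is then the diagnostic step Lancaster carried out.
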
  15. Kutlu, M.; Elsayed, T.; Lease, M.: Intelligent topic selection for low-cost information retrieval evaluation : a new perspective on deep vs. shallow judging (2018) 0.02
    0.02400872 = product of:
      0.09603488 = sum of:
        0.09603488 = weight(_text_:assess in 5092) [ClassicSimilarity], result of:
          0.09603488 = score(doc=5092,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.26051444 = fieldWeight in 5092, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.03125 = fieldNorm(doc=5092)
      0.25 = coord(1/4)
    
    Abstract
    While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections (e.g., ClueWeb12's 700M+ webpages). This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or whether one should simply perform shallow judging over many topics. In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) the method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments. Experiments on the NIST TREC Robust 2003 and Robust 2004 test collections show not only that we can reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
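
    As a back-of-the-envelope illustration of the budget trade-off discussed above (the judgment budget and per-topic depths are assumptions, not values from the paper): the same judging budget buys either a few deeply judged topics or many shallowly judged ones, and the paper's question is which allocation, combined with intelligent topic selection, yields the more reliable evaluation.

    ```python
    # Illustrative budget arithmetic (numbers are assumptions, not from the paper)
    budget = 10_000           # total relevance judgments that can be afforded
    depth_deep = 200          # judgments per topic under deep judging
    depth_shallow = 25        # judgments per topic under shallow judging

    topics_deep = budget // depth_deep        # 50 deeply judged topics
    topics_shallow = budget // depth_shallow  # 400 shallowly judged topics
    print(topics_deep, topics_shallow)
    ```
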
  16. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    0.021181928 = product of:
      0.08472771 = sum of:
        0.08472771 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
          0.08472771 = score(doc=3103,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.38690117 = fieldWeight in 3103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3103)
      0.25 = coord(1/4)
    
    Date
    27. 2.1999 20:55:22
  17. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.02
    0.021181928 = product of:
      0.08472771 = sum of:
        0.08472771 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
          0.08472771 = score(doc=3107,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.38690117 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
      0.25 = coord(1/4)
    
    Date
    27. 2.1999 20:59:22
  18. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.021181928 = product of:
      0.08472771 = sum of:
        0.08472771 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
          0.08472771 = score(doc=2417,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.38690117 = fieldWeight in 2417, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=2417)
      0.25 = coord(1/4)
    
    Pages
    S.22-25
  19. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.02
    0.016945543 = product of:
      0.06778217 = sum of:
        0.06778217 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
          0.06778217 = score(doc=5002,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.30952093 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
      0.25 = coord(1/4)
    
    Date
    19. 3.1996 11:22:12
  20. Sanderson, M.: The Reuters test collection (1996) 0.02
    0.016945543 = product of:
      0.06778217 = sum of:
        0.06778217 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
          0.06778217 = score(doc=6971,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.30952093 = fieldWeight in 6971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=6971)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon

Languages

  • e 41
  • d 3
  • f 1

Types

  • a 42
  • s 3
  • m 2