Search (8 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • year_i:[2010 TO 2020}
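  The facet filters above use Lucene/Solr field syntax; note the mixed brackets in the year filter: [ makes the lower bound inclusive while } makes the upper bound exclusive, so the range matches 2010 through 2019. A sketch of how these filters would map onto Solr filter-query parameters (that the backend is Solr is an assumption, suggested by the ClassicSimilarity explain output below; field names are taken verbatim from the facets):

      fq=language_ss:"e"
      fq=theme_ss:"Retrievalstudien"
      fq=year_i:[2010 TO 2020}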
  1. Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011) 0.04
    0.036841244 = sum of:
      0.020082973 = product of:
        0.08033189 = sum of:
          0.08033189 = weight(_text_:authors in 4197) [ClassicSimilarity], result of:
            0.08033189 = score(doc=4197,freq=4.0), product of:
              0.22555168 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.049475957 = queryNorm
              0.35615736 = fieldWeight in 4197, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
        0.25 = coord(1/4)
      0.016758272 = product of:
        0.033516545 = sum of:
          0.033516545 = weight(_text_:22 in 4197) [ClassicSimilarity], result of:
            0.033516545 = score(doc=4197,freq=2.0), product of:
              0.17325637 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049475957 = queryNorm
              0.19345059 = fieldWeight in 4197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
        0.5 = coord(1/2)
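    The explain tree above can be reproduced by hand. Under ClassicSimilarity, each term clause scores queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf(freq) = sqrt(freq), and idf = 1 + ln(maxDocs / (docFreq + 1)); the coord factor then scales each clause by the fraction of query clauses that matched. A minimal Python sketch using only the constants printed in the tree (the function names are ours, not Lucene's):

      import math

      MAX_DOCS = 44218            # maxDocs from the explain output
      QUERY_NORM = 0.049475957    # queryNorm from the explain output

      def idf(doc_freq: int) -> float:
          """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def clause_score(freq: float, doc_freq: int, field_norm: float) -> float:
          """queryWeight * fieldWeight for a single term clause."""
          tf = math.sqrt(freq)                       # tf(freq) = sqrt(freq)
          query_weight = idf(doc_freq) * QUERY_NORM
          field_weight = tf * idf(doc_freq) * field_norm
          return query_weight * field_weight

      # Result 1 (doc 4197): two matching clauses, each scaled by its coord.
      authors = clause_score(4.0, 1258, 0.0390625) * 0.25   # coord(1/4)
      term22  = clause_score(2.0, 3622, 0.0390625) * 0.5    # coord(1/2)
      print(authors)           # ~0.020082973
      print(term22)            # ~0.016758272
      print(authors + term22)  # ~0.036841244, the displayed total

    The remaining trees compose the same clause scores with different coord factors; for example, result 2's total is 0.08033189 x 0.25 x 0.5 ~ 0.01004149.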
    
    Abstract
    The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
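    For readers unfamiliar with pooling: a depth-k pool for a topic is the union of the top-k documents returned by every participating system, and only pooled documents receive relevance judgments. A minimal sketch of that idea as the abstract reasons about it (the helper name is ours; INEX's actual procedure has additional details):

      def depth_k_pool(runs, k):
          # runs: one ranked list of document ids per participating system.
          # Only documents in the returned pool get judged; unpooled
          # documents are treated as nonrelevant by the metrics.
          pool = set()
          for ranking in runs:
              pool.update(ranking[:k])
          return pool

    The abstract's effort trade-off is then between a small k over many topics and a large k over few topics.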
    Date
    22. 1.2011 14:20:56
  2. Kelly, D.; Sugimoto, C.R.: A systematic review of interactive information retrieval evaluation studies, 1967-2006 (2013) 0.01
    0.0100414865 = product of:
      0.020082973 = sum of:
        0.020082973 = product of:
          0.08033189 = sum of:
            0.08033189 = weight(_text_:authors in 684) [ClassicSimilarity], result of:
              0.08033189 = score(doc=684,freq=4.0), product of:
                0.22555168 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049475957 = queryNorm
                0.35615736 = fieldWeight in 684, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=684)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    With the increasing number and diversity of search tools available, interest in the evaluation of search systems, particularly from a user perspective, has grown among researchers. More researchers are designing and evaluating interactive information retrieval (IIR) systems and beginning to innovate in evaluation methods. Maturation of a research specialty relies on the ability to replicate research, provide standards for measurement and analysis, and understand past endeavors. This article presents a historical overview of 40 years of IIR evaluation studies using the method of systematic review. A total of 2,791 journal and conference units were manually examined and 127 articles were selected for analysis in this study, based on predefined inclusion and exclusion criteria. These articles were systematically coded using features such as author, publication date, sources and references, and properties of the research method used in the articles, such as number of subjects, tasks, corpora, and measures. Results include data describing the growth of IIR studies over time, the most frequently occurring and cited authors and sources, and the most common types of corpora and measures used. An additional product of this research is a bibliography of IIR evaluation research that can be used by students, teachers, and those new to the area. To the authors' knowledge, this is the first historical, systematic characterization of the IIR evaluation literature, including the documentation of methods and measures used by researchers in this specialty.
  3. Al-Maskari, A.; Sanderson, M.: A review of factors influencing user satisfaction in information retrieval (2010) 0.01
    0.009940564 = product of:
      0.019881127 = sum of:
        0.019881127 = product of:
          0.07952451 = sum of:
            0.07952451 = weight(_text_:authors in 3447) [ClassicSimilarity], result of:
              0.07952451 = score(doc=3447,freq=2.0), product of:
                0.22555168 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049475957 = queryNorm
                0.35257778 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3447)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The authors investigate factors influencing user satisfaction in information retrieval. The study shows that user satisfaction is a subjective variable that can be influenced by several factors, such as system effectiveness, user effectiveness, user effort, and user characteristics and expectations. Information retrieval evaluators should therefore take all of these factors into account when eliciting user satisfaction and when using it as a criterion of system effectiveness. Previous studies have reached conflicting conclusions about the relationship between user satisfaction and system effectiveness; this study substantiates those findings and supports using user satisfaction as a criterion of system effectiveness.
  4. Schultz Jr., W.N.; Braddy, L.: A librarian-centered study of perceptions of subject terms and controlled vocabulary (2017) 0.01
    0.009940564 = product of:
      0.019881127 = sum of:
        0.019881127 = product of:
          0.07952451 = sum of:
            0.07952451 = weight(_text_:authors in 5156) [ClassicSimilarity], result of:
              0.07952451 = score(doc=5156,freq=2.0), product of:
                0.22555168 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049475957 = queryNorm
                0.35257778 = fieldWeight in 5156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5156)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Controlled vocabulary and subject headings in OPAC records have proven useful in improving search results. The authors used a survey to gather information about librarians' opinions and professional use of controlled vocabulary. Responses from a range of backgrounds and levels of expertise were examined, spanning academic and public libraries and both technical services and public services professionals. Overall, responses demonstrated positive opinions of the value of controlled vocabulary, both in reference interactions and during bibliographic instruction sessions. Results were also examined by factors such as librarian age and type.
  5. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.01
    0.008379136 = product of:
      0.016758272 = sum of:
        0.016758272 = product of:
          0.033516545 = sum of:
            0.033516545 = weight(_text_:22 in 4540) [ClassicSimilarity], result of:
              0.033516545 = score(doc=4540,freq=2.0), product of:
                0.17325637 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049475957 = queryNorm
                0.19345059 = fieldWeight in 4540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    12. 7.2011 18:29:22
  6. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.01
    0.008379136 = product of:
      0.016758272 = sum of:
        0.016758272 = product of:
          0.033516545 = sum of:
            0.033516545 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
              0.033516545 = score(doc=1786,freq=2.0), product of:
                0.17325637 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049475957 = queryNorm
                0.19345059 = fieldWeight in 1786, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1786)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6. 4.2015 19:31:22
  7. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.01
    0.008379136 = product of:
      0.016758272 = sum of:
        0.016758272 = product of:
          0.033516545 = sum of:
            0.033516545 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
              0.033516545 = score(doc=2587,freq=2.0), product of:
                0.17325637 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049475957 = queryNorm
                0.19345059 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
  8. Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019) 0.01
    0.008379136 = product of:
      0.016758272 = sum of:
        0.016758272 = product of:
          0.033516545 = sum of:
            0.033516545 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
              0.033516545 = score(doc=5287,freq=2.0), product of:
                0.17325637 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049475957 = queryNorm
                0.19345059 = fieldWeight in 5287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5287)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22