Search (4 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • year_i:[2020 TO 2030}
  1. Petras, V.; Womser-Hacker, C.: Evaluation im Information Retrieval (2023) 0.01
    
    Abstract
    The goal of an evaluation is to determine whether, or to what extent, an information system meets the requirements placed on it. Information systems can be evaluated from different perspectives. For a holistic evaluation (in German, Evaluierung is used synonymously with Evaluation) that considers different quality aspects (e.g., how well a system ranks relevant documents, how quickly a system executes a search, how the presentation of results is designed, or how searchers are guided through the system) and verifies the fulfillment of several requirements, it is advisable to apply both perspectival and methodological triangulation (i.e., the use of several approaches to quality assessment). In information retrieval (IR), evaluation concentrates on assessing the quality of the search function of an information retrieval system (IRS), and a distinction is often drawn between system-centered and user-centered evaluation. This chapter focuses on system-centered evaluation, while other chapters of this handbook discuss other evaluation approaches (see chapters C 4 Interactive Information Retrieval, C 7 Cross-Language Information Retrieval, and D 1 Information Behavior).
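    The system-centered perspective described above typically reduces to computing effectiveness metrics over a ranked result list against relevance judgments. A minimal sketch in Python of two standard metrics, precision@k and average precision; the function names and toy data are illustrative assumptions, not taken from the chapter:

        # Sketch of system-centered IR evaluation: precision@k and average
        # precision for one ranked list, given binary relevance judgments.
        # Toy data and names are illustrative, not from the chapter.

        def precision_at_k(ranking, relevant, k):
            """Fraction of the top-k retrieved documents that are relevant."""
            return sum(1 for doc in ranking[:k] if doc in relevant) / k

        def average_precision(ranking, relevant):
            """Mean of precision@k over the ranks where relevant docs occur."""
            hits, precision_sum = 0, 0.0
            for k, doc in enumerate(ranking, start=1):
                if doc in relevant:
                    hits += 1
                    precision_sum += hits / k
            return precision_sum / len(relevant) if relevant else 0.0

        ranking = ["d3", "d1", "d7", "d2", "d5"]   # system output, best first
        relevant = {"d1", "d2", "d9"}              # judged relevant documents
        print(precision_at_k(ranking, relevant, 3))   # 1 of top 3 -> 0.3333
        print(average_precision(ranking, relevant))   # (1/2 + 2/4)/3 -> 0.3333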
  2. Parapar, J.; Losada, D.E.; Presedo-Quindimil, M.A.; Barreiro, A.: Using score distributions to compare statistical significance tests for information retrieval evaluation (2020) 0.01
    
    Abstract
    Statistical significance tests can provide evidence that the observed difference in performance between two methods is not due to chance. In information retrieval (IR), some studies have examined the validity and suitability of such tests for comparing search systems. We argue here that current methods for assessing the reliability of statistical tests suffer from some methodological weaknesses, and we propose a novel way to study significance tests for retrieval evaluation. Using score distributions, we model the output of multiple search systems, produce simulated search results from such models, and compare them using various significance tests. A key strength of this approach is that we assess statistical tests under perfect knowledge about the truth or falsity of the null hypothesis. This new method for studying the power of significance tests in IR evaluation is formal and innovative. Following this type of analysis, we found that both the sign test and the Wilcoxon signed-rank test have more power than the permutation test and the t-test. The sign test and Wilcoxon signed-rank test also show good behavior in terms of type I errors. The bootstrap test shows few type I errors, but it has less power than the other methods tested.
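    The simulation methodology this abstract describes (draw paired per-query scores for two systems under a known null hypothesis, run the tests, count rejections) can be illustrated compactly. In the sketch below, the normal per-query noise and the effect size DELTA are illustrative assumptions standing in for the paper's fitted score-distribution models, and the permutation and bootstrap tests are omitted for brevity:

        # Assess significance tests under perfect knowledge of H0: when H0 is
        # true, systems A and B share a mean; when false, B is better by DELTA.
        # Normal noise and DELTA are assumptions replacing the paper's fitted
        # score distributions.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        N_QUERIES, N_TRIALS, ALPHA, DELTA = 50, 1000, 0.05, 0.05

        def simulate(h0_true):
            """Paired per-query effectiveness scores for systems A and B."""
            a = rng.normal(0.5, 0.1, N_QUERIES)
            b = rng.normal(0.5 + (0.0 if h0_true else DELTA), 0.1, N_QUERIES)
            return a, b

        def rejection_rate(pvalue_fn, h0_true):
            """Fraction of simulated evaluations in which H0 is rejected."""
            rejected = sum(
                pvalue_fn(*simulate(h0_true)) < ALPHA for _ in range(N_TRIALS))
            return rejected / N_TRIALS

        tests = {
            "t-test": lambda a, b: stats.ttest_rel(a, b).pvalue,
            "Wilcoxon": lambda a, b: stats.wilcoxon(a, b).pvalue,
            "sign test": lambda a, b: stats.binomtest(
                int(np.sum(b > a)), n=N_QUERIES).pvalue,
        }

        for name, pvalue_fn in tests.items():
            # Type I error: rejections when H0 is true; power: when it is false.
            print(f"{name}: type I ~ {rejection_rate(pvalue_fn, True):.3f}, "
                  f"power ~ {rejection_rate(pvalue_fn, False):.3f}")

    With the paper's actual score-distribution models in place of the normal assumption, the relative ordering of the tests' type I error and power is exactly the empirical question the study addresses.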
  3. Wartena, C.; Golub, K.: Evaluierung von Verschlagwortung im Kontext des Information Retrievals (2021) 0.00
    
  4. Gao, R.; Ge, Y.; Sha, C.: FAIR: Fairness-aware information retrieval evaluation (2022) 0.00