Search (4 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Schirrmeister, N.-P.; Keil, S.: Aufbau einer Infrastruktur für Information Retrieval-Evaluationen (2012) 0.01
    0.0147717865 = product of:
      0.06647304 = sum of:
        0.011975031 = weight(_text_:of in 3097) [ClassicSimilarity], result of:
          0.011975031 = score(doc=3097,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19546966 = fieldWeight in 3097, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
        0.054498006 = weight(_text_:software in 3097) [ClassicSimilarity], result of:
          0.054498006 = score(doc=3097,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.35064998 = fieldWeight in 3097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
      0.22222222 = coord(2/9)
    
    Abstract
    The project "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE) provides a software infrastructure for supporting information retrieval evaluations (IR evaluations). The infrastructure is based on a toolkit developed at GESIS within the DFG project IRM. The goal is to offer a system that can be used for IR evaluations in research and teaching at the Fachbereich Media. This paper describes some aspects of the AIIRE project; its goal is to build a software infrastructure that supports the evaluation of information retrieval algorithms.
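    The score breakdown shown for this result follows Lucene's ClassicSimilarity (TF-IDF) formula. As a minimal sketch, the Python snippet below recomputes the displayed 0.0147717865 from the values in the explain tree; the helper name term_score is illustrative, not a Lucene API.
      import math

      def term_score(freq, idf, query_norm, field_norm):
          # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
          query_weight = idf * query_norm
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      query_norm = 0.03917671
      w_of = term_score(freq=4.0, idf=1.5637573, query_norm=query_norm, field_norm=0.0625)        # ~0.011975
      w_software = term_score(freq=2.0, idf=3.9671519, query_norm=query_norm, field_norm=0.0625)  # ~0.054498
      coord = 2 / 9  # coord(2/9): two of nine query clauses matched
      print((w_of + w_software) * coord)  # ~0.0147718, matching the displayed score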
  2. Sünkler, S.: Prototypische Entwicklung einer Software für die Erfassung und Analyse explorativer Suchen in Verbindung mit Tests zur Retrievaleffektivität (2012) 0.01
    0.0053522103 = product of:
      0.048169892 = sum of:
        0.048169892 = weight(_text_:software in 479) [ClassicSimilarity], result of:
          0.048169892 = score(doc=479,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30993375 = fieldWeight in 479, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=479)
      0.11111111 = coord(1/9)
    
    Abstract
    The subject of this thesis is the development of a functional prototype of a web application that links the evaluation of exploratory searches with the execution of classical retrieval tests. As a basis for programming the prototype, user-oriented and system-oriented evaluation methods for search engines are analyzed and combined into a theoretical model for studying information systems and search engines. In designing the model and the prototype, it is shown how recorded interaction data can be used in practice for search engine evaluation: on the one hand to obtain a data basis for retrieval tests, and on the other hand to take into account the implicit feedback conveyed by users' actions when analyzing relevance assessments. Retrieval tests are the established and proven means of measuring the retrieval effectiveness of information systems and search engines, but they leave actual user behavior out of consideration. One method for capturing the interactions of search engine users is protocol-based testing, which generates log files on the users of an application. The software implemented as part of this thesis offers an approach for conducting retrieval tests on the basis of logged user data combined with controlled search tasks. The result of this work is a finished functional prototype whose scope already makes it usable in search engine studies.
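    A minimal sketch of the idea described in this abstract, assuming a hypothetical log format: clicks and dwell times recorded during a controlled search task are turned into implicit binary relevance judgments, which can then feed a classic effectiveness measure such as precision at k. The field names and the dwell threshold are illustrative assumptions, not taken from the prototype.
      # Hypothetical log records: one entry per result the user interacted with.
      log = [
          {"query": "q1", "doc": "d3", "rank": 1, "clicked": True,  "dwell_s": 42.0},
          {"query": "q1", "doc": "d7", "rank": 2, "clicked": True,  "dwell_s": 3.5},
          {"query": "q1", "doc": "d9", "rank": 4, "clicked": False, "dwell_s": 0.0},
      ]

      def implicit_relevance(entry, min_dwell=10.0):
          # Treat a click with sufficient dwell time as an implicit positive judgment.
          return entry["clicked"] and entry["dwell_s"] >= min_dwell

      def precision_at_k(entries, k):
          # Results not present in the log are assumed not relevant.
          top = [e for e in entries if e["rank"] <= k]
          return sum(implicit_relevance(e) for e in top) / k

      print(precision_at_k(log, k=4))  # 0.25 for the sample records above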
  3. Hider, P.: The search value added by professional indexing to a bibliographic database (2017) 0.00
    0.0019502735 = product of:
      0.017552461 = sum of:
        0.017552461 = weight(_text_:of in 3868) [ClassicSimilarity], result of:
          0.017552461 = score(doc=3868,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28651062 = fieldWeight in 3868, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3868)
      0.11111111 = coord(1/9)
    
    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This paper reports on an analysis of the loss levels that would result if a bibliographic database, namely the Australian Education Index (AEI), were missing the subject descriptors and identifiers assigned by its professional indexers, employing the methodology developed by Gross and Taylor (2005), and later by Gross et al. (2015). The results indicate that AEI users would lose a similar proportion of hits per query to that experienced by library catalog users: on average, 27% of the resources found by a sample of keyword queries on the AEI database would not have been found without the subject indexing, based on the Australian Thesaurus of Education Descriptors (ATED). The paper also discusses the methodological limitations of these studies, pointing out that real-life users might still find some of the resources missed by a particular query through follow-up searches, while additional resources might also be found through iterative searching on the subject vocabulary. The paper goes on to describe a new research design, based on a before-and-after experiment, which addresses some of these limitations. It is argued that this alternative design will provide a more realistic picture of the value that professionally assigned subject indexing and controlled subject vocabularies can add to literature searching of a more scholarly and thorough kind.
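    As a rough sketch of the hit-loss methodology described in this abstract, assuming invented hit counts per query (not the study's data): each keyword query is run against the full records and against records with the subject descriptors and identifiers removed, and the proportion of lost hits is averaged.
      # Hypothetical per-query hit counts: (hits with subject indexing, hits without it).
      samples = [(120, 85), (40, 31), (200, 150), (15, 11)]

      def loss_rate(with_index, without_index):
          # Share of hits that disappear when descriptors/identifiers are dropped.
          return (with_index - without_index) / with_index

      mean_loss = sum(loss_rate(w, wo) for w, wo in samples) / len(samples)
      print(f"{mean_loss:.0%}")  # ~26% for the invented sample above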
  4. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.00
    0.0013148742 = product of:
      0.011833867 = sum of:
        0.011833867 = weight(_text_:of in 3144) [ClassicSimilarity], result of:
          0.011833867 = score(doc=3144,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19316542 = fieldWeight in 3144, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
      0.11111111 = coord(1/9)
    
    Abstract
    Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few concentrate on the question of what users miss by neglecting the lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we find no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
    Footnote
    To appear in Experimental IR Meets Multilinguality, Multimodality, and Interaction. 7th International Conference of the CLEF Association, CLEF 2016, Évora, Portugal, September 5-8, 2016.
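    A minimal sketch of the kind of head-versus-tail comparison reported in this abstract, with invented judgment counts: binary relevance judgments for head and tail results are compared with a pooled two-proportion z-test, where a large p-value would mirror the finding of no significant difference. The counts below are illustrative, not the study's data.
      import math

      def two_proportion_z(success_a, n_a, success_b, n_b):
          # Pooled two-proportion z statistic for comparing relevant-result rates.
          p_a, p_b = success_a / n_a, success_b / n_b
          p_pool = (success_a + success_b) / (n_a + n_b)
          se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
          return (p_a - p_b) / se

      # Invented counts: relevant judgments out of all judgments, head vs. tail.
      z = two_proportion_z(62, 100, 57, 100)
      p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided p-value from the normal approximation
      print(f"z={z:.2f}, p={p:.3f}")  # p well above 0.05 -> no significant difference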
