Search (123 results, page 1 of 7)

  • theme_ss:"Retrievalstudien"
  1. Van der Walt, H.E.A.; Brakel, P.A. van: Method for the evaluation of the retrieval effectiveness of a CD-ROM bibliographic database (1991) 0.08
    0.0789566 = product of:
      0.1579132 = sum of:
        0.14943607 = weight(_text_:van in 3114) [ClassicSimilarity], result of:
          0.14943607 = score(doc=3114,freq=4.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.60993946 = fieldWeight in 3114, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3114)
        0.008477128 = product of:
          0.016954256 = sum of:
            0.016954256 = weight(_text_:der in 3114) [ClassicSimilarity], result of:
              0.016954256 = score(doc=3114,freq=2.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.17275909 = fieldWeight in 3114, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3114)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
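    The score breakdown shown under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output: each weight(_text_:term) clause is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(m/n) scales a sum by the fraction of query clauses matched. As a minimal sketch (not the Lucene API; the helper function and variable names are illustrative), entry 1's score can be reproduced from the constants in its tree:

    ```python
    import math

    def term_weight(raw_freq, idf, query_norm, field_norm):
        """One weight(_text_:term) clause: queryWeight * fieldWeight."""
        tf = math.sqrt(raw_freq)              # tf(freq) = sqrt(freq)
        query_weight = idf * query_norm       # idf * queryNorm
        field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
        return query_weight * field_weight

    query_norm = 0.043933928
    # idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1))
    idf_van = 1 + math.log(44218 / (454 + 1))    # ~ 5.5765896
    idf_der = 1 + math.log(44218 / (12875 + 1))  # ~ 2.2337668

    w_van = term_weight(4.0, idf_van, query_norm, 0.0546875)
    w_der = term_weight(2.0, idf_der, query_norm, 0.0546875)

    # coord(1/2) on the nested "der" clause, coord(2/4) on the outer sum
    score = (w_van + 0.5 * w_der) * 0.5
    # score ~ 0.0789566, matching entry 1's explain tree
    ```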
    
  2. Rijsbergen, C.J. van: Foundations of evaluation (1974) 0.08
    0.07547662 = product of:
      0.30190647 = sum of:
        0.30190647 = weight(_text_:van in 1078) [ClassicSimilarity], result of:
          0.30190647 = score(doc=1078,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            1.2322638 = fieldWeight in 1078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.15625 = fieldNorm(doc=1078)
      0.25 = coord(1/4)
    
  3. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.07
    0.07228617 = product of:
      0.14457235 = sum of:
        0.12076259 = weight(_text_:van in 5002) [ClassicSimilarity], result of:
          0.12076259 = score(doc=5002,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.49290553 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.023809763 = product of:
          0.047619525 = sum of:
            0.047619525 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.047619525 = score(doc=5002,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    19. 3.1996 11:22:12
  4. Rijsbergen, C.J. van: Retrieval effectiveness (1981) 0.06
    0.060381293 = product of:
      0.24152517 = sum of:
        0.24152517 = weight(_text_:van in 3147) [ClassicSimilarity], result of:
          0.24152517 = score(doc=3147,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.98581105 = fieldWeight in 3147, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.125 = fieldNorm(doc=3147)
      0.25 = coord(1/4)
    
  5. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.05
    0.05421463 = product of:
      0.10842926 = sum of:
        0.09057194 = weight(_text_:van in 6967) [ClassicSimilarity], result of:
          0.09057194 = score(doc=6967,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.36967915 = fieldWeight in 6967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.01785732 = product of:
          0.03571464 = sum of:
            0.03571464 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.03571464 = score(doc=6967,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  6. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.04
    0.043758858 = product of:
      0.087517716 = sum of:
        0.05775551 = weight(_text_:c in 3107) [ClassicSimilarity], result of:
          0.05775551 = score(doc=3107,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.381109 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
        0.029762203 = product of:
          0.059524406 = sum of:
            0.059524406 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.059524406 = score(doc=3107,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 2.1999 20:59:22
  7. Vegt, A. van der; Zuccon, G.; Koopman, B.: Do better search engines really equate to better clinical decisions? : If not, why not? (2021) 0.04
    0.040765855 = product of:
      0.08153171 = sum of:
        0.07547662 = weight(_text_:van in 150) [ClassicSimilarity], result of:
          0.07547662 = score(doc=150,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.30806595 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=150)
        0.0060550915 = product of:
          0.012110183 = sum of:
            0.012110183 = weight(_text_:der in 150) [ClassicSimilarity], result of:
              0.012110183 = score(doc=150,freq=2.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.12339935 = fieldWeight in 150, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=150)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
  8. Becks, D.; Mandl, T.; Womser-Hacker, C.: Spezielle Anforderungen bei der Evaluierung von Patent-Retrieval-Systemen (2010) 0.04
    0.03896984 = product of:
      0.07793968 = sum of:
        0.057175044 = weight(_text_:c in 4667) [ClassicSimilarity], result of:
          0.057175044 = score(doc=4667,freq=4.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3772787 = fieldWeight in 4667, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4667)
        0.020764638 = product of:
          0.041529275 = sum of:
            0.041529275 = weight(_text_:der in 4667) [ClassicSimilarity], result of:
              0.041529275 = score(doc=4667,freq=12.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4231716 = fieldWeight in 4667, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4667)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Within information-science-oriented specialist information, the patent domain occupies a special position: it has a number of peculiarities that make it necessary to revise or adapt the classical evaluation methods. This is evidenced, among other things, by the results of the Intellectual Property Track, which has been run since 2009 as part of the CLEF evaluation campaign. This article describes the results achieved within that track and, beyond that, works out the consequences for the evaluation of patent retrieval systems.
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  9. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.04
    0.03773831 = product of:
      0.15095323 = sum of:
        0.15095323 = weight(_text_:van in 7522) [ClassicSimilarity], result of:
          0.15095323 = score(doc=7522,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.6161319 = fieldWeight in 7522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.078125 = fieldNorm(doc=7522)
      0.25 = coord(1/4)
    
  10. Petras, V.; Womser-Hacker, C.: Evaluation im Information Retrieval (2023) 0.04
    0.03630328 = product of:
      0.07260656 = sum of:
        0.06002129 = weight(_text_:c in 808) [ClassicSimilarity], result of:
          0.06002129 = score(doc=808,freq=6.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3960601 = fieldWeight in 808, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.046875 = fieldNorm(doc=808)
        0.012585273 = product of:
          0.025170546 = sum of:
            0.025170546 = weight(_text_:der in 808) [ClassicSimilarity], result of:
              0.025170546 = score(doc=808,freq=6.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.25648075 = fieldWeight in 808, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.046875 = fieldNorm(doc=808)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The goal of an evaluation is to determine whether, or to what extent, an information system meets the requirements placed on it. Information systems can be evaluated from different perspectives. For a holistic evaluation that considers different quality aspects (e.g. how well a system ranks relevant documents, how fast it executes a search, how the result presentation is designed, or how searchers are guided through the system) and checks that several requirements are met, it is advisable to triangulate both perspectives and methods (i.e. to combine several approaches to quality assessment). In information retrieval (IR), evaluation concentrates on assessing the quality of the search function of an information retrieval system (IRS), where a distinction is often drawn between system-centred and user-centred evaluation. This chapter focuses on system-centred evaluation, while other chapters of this handbook discuss other evaluation approaches (see chapters C 4 Interactive Information Retrieval, C 7 Cross-Language Information Retrieval, and D 1 Information Behavior).
    Source
    Grundlagen der Informationswissenschaft. Hrsg.: Rainer Kuhlen, Dirk Lewandowski, Wolfgang Semar und Christa Womser-Hacker. 7., völlig neu gefasste Ausg
  11. Krause, J.; Womser-Hacker, C.: PADOK-II : Retrievaltests zur Bewertung von Volltextindexierungsvarianten für das deutsche Patentinformationssystem (1990) 0.04
    0.03591842 = product of:
      0.07183684 = sum of:
        0.04620441 = weight(_text_:c in 2653) [ClassicSimilarity], result of:
          0.04620441 = score(doc=2653,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=2653)
        0.025632428 = product of:
          0.051264856 = sum of:
            0.051264856 = weight(_text_:der in 2653) [ClassicSimilarity], result of:
              0.051264856 = score(doc=2653,freq=14.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.5223744 = fieldWeight in 2653, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2653)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Presented are the results of extensive retrieval tests of two variants of content indexing (free text and PASSAT) for the German patent information system, based on full texts. The tests were carried out from 1986 to 1989 by the Linguistische Informationswissenschaft group at the University of Regensburg in cooperation with the German Patent Office, the Fachinformationszentrum Karlsruhe, and several industrial partners. The report focuses on the general approach to evaluating the project's goals and on the presentation of the statistical evaluation results.
  12. Ellis, D.: Progress and problems in information retrieval (1996) 0.04
    0.035007086 = product of:
      0.07001417 = sum of:
        0.04620441 = weight(_text_:c in 789) [ClassicSimilarity], result of:
          0.04620441 = score(doc=789,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=789)
        0.023809763 = product of:
          0.047619525 = sum of:
            0.047619525 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
              0.047619525 = score(doc=789,freq=2.0), product of:
                0.15384912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043933928 = queryNorm
                0.30952093 = fieldWeight in 789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=789)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    26. 7.2002 20:22:46
    Footnote
    Reviewed in: Managing information 3(1996) no.10, p.49 (D. Bawden); Program 32(1998) no.2, p.190-192 (C. Revie)
  13. Bauer, G.; Schneider, C.: PADOK-II : Untersuchungen zur Volltextproblematik und zur interpretativen Analyse der Retrievalprotokolle (1990) 0.03
    0.034967713 = product of:
      0.069935426 = sum of:
        0.04620441 = weight(_text_:c in 4164) [ClassicSimilarity], result of:
          0.04620441 = score(doc=4164,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 4164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=4164)
        0.023731016 = product of:
          0.04746203 = sum of:
            0.04746203 = weight(_text_:der in 4164) [ClassicSimilarity], result of:
              0.04746203 = score(doc=4164,freq=12.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4836247 = fieldWeight in 4164, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4164)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This contribution builds on the report on the methodological concept, execution, and results of the PADOK-II retrieval tests (Krause/Womser-Hacker). Described here are the results of additional tests on the influence of the length of the underlying documents (full text vs. title+abstract), which show a marked degradation of recall values with reduced document length. For the interpretative analysis of the retrieval protocols, the methodological framing, the starting points of the analysis, and first results are presented.
  14. Womser-Hacker, C.: Evaluierung im Information Retrieval (2013) 0.03
    0.034932848 = product of:
      0.069865696 = sum of:
        0.05775551 = weight(_text_:c in 728) [ClassicSimilarity], result of:
          0.05775551 = score(doc=728,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.381109 = fieldWeight in 728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.078125 = fieldNorm(doc=728)
        0.012110183 = product of:
          0.024220366 = sum of:
            0.024220366 = weight(_text_:der in 728) [ClassicSimilarity], result of:
              0.024220366 = score(doc=728,freq=2.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.2467987 = fieldWeight in 728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.078125 = fieldNorm(doc=728)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Grundlagen der praktischen Information und Dokumentation. Handbuch zur Einführung in die Informationswissenschaft und -praxis. 6., völlig neu gefaßte Ausgabe. Hrsg. von R. Kuhlen, W. Semar u. D. Strauch. Begründet von Klaus Laisiepen, Ernst Lutterbeck, Karl-Heinrich Meyer-Uhlenried
  15. Kluck, M.; Winter, M.: Topic-Entwicklung und Relevanzbewertung bei GIRT : ein Werkstattbericht (2006) 0.03
    0.031492386 = product of:
      0.06298477 = sum of:
        0.04620441 = weight(_text_:c in 5967) [ClassicSimilarity], result of:
          0.04620441 = score(doc=5967,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 5967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=5967)
        0.016780363 = product of:
          0.033560727 = sum of:
            0.033560727 = weight(_text_:der in 5967) [ClassicSimilarity], result of:
              0.033560727 = score(doc=5967,freq=6.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.34197432 = fieldWeight in 5967, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5967)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The connection between topic development and relevance assessment is discussed using case examples from the 2005 CLEF evaluation campaign. In the domain-specific retrieval test for multilingual systems, the topics were developed against the GIRT document collection. The relationships between topic formulation and the room for interpretation in relevance assessment are examined.
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  16. Mandl, T.: Neue Entwicklungen bei den Evaluierungsinitiativen im Information Retrieval (2006) 0.03
    0.031492386 = product of:
      0.06298477 = sum of:
        0.04620441 = weight(_text_:c in 5975) [ClassicSimilarity], result of:
          0.04620441 = score(doc=5975,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.3048872 = fieldWeight in 5975, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0625 = fieldNorm(doc=5975)
        0.016780363 = product of:
          0.033560727 = sum of:
            0.033560727 = weight(_text_:der in 5975) [ClassicSimilarity], result of:
              0.033560727 = score(doc=5975,freq=6.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.34197432 = fieldWeight in 5975, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5975)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In information retrieval, evaluation initiatives contribute substantially to empirically grounded research. With extensive collections and tasks, they support standardisation and thus system development. Growing demands regarding corpora and application scenarios have led to strong diversification among the evaluation initiatives. This article gives an overview of the current state of the most important evaluation initiatives and of new trends.
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  17. Wolff, C.: Leistungsvergleich der Retrievaloberflächen zwischen Web und klassischen Expertensystemen (2001) 0.03
    0.031428616 = product of:
      0.06285723 = sum of:
        0.04042886 = weight(_text_:c in 5870) [ClassicSimilarity], result of:
          0.04042886 = score(doc=5870,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 5870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5870)
        0.022428373 = product of:
          0.044856746 = sum of:
            0.044856746 = weight(_text_:der in 5870) [ClassicSimilarity], result of:
              0.044856746 = score(doc=5870,freq=14.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.4570776 = fieldWeight in 5870, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5870)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Most of the hosts' web front ends have so far been aimed at retrieval laypersons, the underlying goal being: more usage through simpler retrieval. This approach, however, conflicts with growing data volumes and document sizes, which actually demand ever more sophisticated retrieval. Information professionals frequently criticise the web applications for bringing a loss of relevance. How far users actually have to accept a compromise between relevance and completeness is quantified in this contribution using several host computers.
    Series
    Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis; 4
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
  18. Kluck, M.; Mandl, T.; Womser-Hacker, C.: Cross-Language Evaluation Forum (CLEF) : Europäische Initiative zur Bewertung sprachübergreifender Retrievalverfahren (2002) 0.03
    0.027555838 = product of:
      0.055111676 = sum of:
        0.04042886 = weight(_text_:c in 266) [ClassicSimilarity], result of:
          0.04042886 = score(doc=266,freq=2.0), product of:
            0.15154591 = queryWeight, product of:
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.043933928 = queryNorm
            0.2667763 = fieldWeight in 266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.4494052 = idf(docFreq=3817, maxDocs=44218)
              0.0546875 = fieldNorm(doc=266)
        0.014682818 = product of:
          0.029365636 = sum of:
            0.029365636 = weight(_text_:der in 266) [ClassicSimilarity], result of:
              0.029365636 = score(doc=266,freq=6.0), product of:
                0.098138146 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043933928 = queryNorm
                0.29922754 = fieldWeight in 266, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In recent years, an initiative for evaluating information retrieval in multilingual contexts has established itself in Europe. The Cross-Language Evaluation Forum (CLEF) is funded by the EU and cooperates with evaluation projects in the USA (TREC) and in Japan (NTCIR). This article places CLEF in the context of the other international initiatives. New developments both in information retrieval systems and in evaluation methods are presented. The high number of participants from research institutions and industry demonstrates the growing importance of cross-language retrieval.
  19. Information retrieval experiment (1981) 0.03
    0.026416814 = product of:
      0.105667256 = sum of:
        0.105667256 = weight(_text_:van in 2653) [ClassicSimilarity], result of:
          0.105667256 = score(doc=2653,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.43129233 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2653)
      0.25 = coord(1/4)
    
    Content
    Contains the contributions: ROBERTSON, S.E.: The methodology of information retrieval experiment; RIJSBERGEN, C.J. van: Retrieval effectiveness; BELKIN, N.: Ineffable concepts in information retrieval; TAGUE, J.M.: The pragmatics of information retrieval experimentation; LANCASTER, F.W.: Evaluation within the environment of an operating information service; BARRACLOUGH, E.D.: Opportunities for testing with online systems; KEEN, M.E.: Laboratory tests of manual systems; ODDY, R.N.: Laboratory tests: automatic systems; HEINE, M.D.: Simulation, and simulation experiments; COOPER, W.S.: Gedanken experimentation: an alternative to traditional system testing?; SPARCK JONES, K.: Actual tests - retrieval system tests; EVANS, L.: An experiment: search strategy variation in SDI profiles; SALTON, G.: The Smart environment for retrieval system evaluation - advantages and problem areas
  20. Sparck Jones, K.; Rijsbergen, C.J. van: Progress in documentation : Information retrieval test collection (1976) 0.03
    0.026416814 = product of:
      0.105667256 = sum of:
        0.105667256 = weight(_text_:van in 4161) [ClassicSimilarity], result of:
          0.105667256 = score(doc=4161,freq=2.0), product of:
            0.24500148 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043933928 = queryNorm
            0.43129233 = fieldWeight in 4161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4161)
      0.25 = coord(1/4)
    

Languages

  • e 73
  • d 46
  • chi 1
  • f 1
  • m 1

Types

  • a 103
  • el 6
  • s 6
  • m 5
  • r 5
  • x 5
  • p 1