Search (139 results, page 1 of 7)

  • theme_ss:"Retrievalstudien"
  1. Rijsbergen, C.J. van: Foundations of evaluation (1974) 0.08
    0.075373136 = product of:
      0.30149254 = sum of:
        0.30149254 = weight(_text_:van in 1078) [ClassicSimilarity], result of:
          0.30149254 = score(doc=1078,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            1.2322638 = fieldWeight in 1078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.15625 = fieldNorm(doc=1078)
      0.25 = coord(1/4)
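The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formulas: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a final coord factor for the fraction of query clauses matched. A minimal Python sketch reproducing result 1's score; queryNorm and the coord factor are taken as given from the tree, since both depend on the query as a whole:

```python
import math

# Constants from the explain output for term "van" in doc 1078.
doc_freq, max_docs = 454, 44218
query_norm = 0.043873694   # printed in the tree; depends on the whole query
freq, field_norm = 2.0, 0.15625

idf = 1 + math.log(max_docs / (doc_freq + 1))  # 5.5765896
tf = math.sqrt(freq)                           # 1.4142135
query_weight = idf * query_norm                # 0.24466558
field_weight = tf * idf * field_norm           # 1.2322638
score = query_weight * field_weight            # 0.30149254
final = score * 0.25                           # coord(1/4) -> 0.075373136
```

Note that idf reconstructs exactly from docFreq=454 and maxDocs=44218, which is why the same value 5.5765896 recurs for every "van" clause below.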
    
  2. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.07
    0.072187066 = product of:
      0.14437413 = sum of:
        0.12059701 = weight(_text_:van in 5002) [ClassicSimilarity], result of:
          0.12059701 = score(doc=5002,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.49290553 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.02377712 = product of:
          0.04755424 = sum of:
            0.04755424 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.04755424 = score(doc=5002,freq=2.0), product of:
                0.1536382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043873694 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
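Result 2 combines two term clauses: the nested coord(1/2) indicates that the "22" clause sits inside a sub-query of which only one of two clauses matched, and the outer coord(2/4) scales the sum of both clause weights. A sketch of the whole tree under the same assumptions as before (standard ClassicSimilarity formulas, queryNorm and coord factors read off the tree):

```python
import math

def clause_weight(doc_freq, freq, field_norm,
                  query_norm=0.043873694, max_docs=44218):
    """queryWeight * fieldWeight for one term clause (ClassicSimilarity)."""
    idf = 1 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

w_van = clause_weight(doc_freq=454, freq=2.0, field_norm=0.0625)   # 0.12059701
w_22 = clause_weight(doc_freq=3622, freq=2.0, field_norm=0.0625)   # 0.04755424
# Inner coord(1/2) applies to the "22" sub-query, outer coord(2/4) to the sum.
score = (w_van + w_22 * 0.5) * 0.5                                 # 0.072187066
```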
    
    Date
    19. 3.1996 11:22:12
  3. Rijsbergen, C.J. van: Retrieval effectiveness (1981) 0.06
    0.060298506 = product of:
      0.24119402 = sum of:
        0.24119402 = weight(_text_:van in 3147) [ClassicSimilarity], result of:
          0.24119402 = score(doc=3147,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.98581105 = fieldWeight in 3147, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.125 = fieldNorm(doc=3147)
      0.25 = coord(1/4)
    
  4. Abdou, S.; Savoy, J.: Searching in Medline : query expansion and manual indexing evaluation (2008) 0.06
    0.0599064 = product of:
      0.1198128 = sum of:
        0.029365042 = weight(_text_:j in 2062) [ClassicSimilarity], result of:
          0.029365042 = score(doc=2062,freq=2.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.21064025 = fieldWeight in 2062, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=2062)
        0.09044776 = weight(_text_:van in 2062) [ClassicSimilarity], result of:
          0.09044776 = score(doc=2062,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.36967915 = fieldWeight in 2062, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=2062)
      0.5 = coord(2/4)
    
    Abstract
    Based on a relatively large subset representing one third of the Medline collection, this paper evaluates ten different IR models, including recent developments in both probabilistic and language models. We show that the best performing IR model is a probabilistic model developed within the Divergence from Randomness framework [Amati, G., & van Rijsbergen, C.J. (2002). Probabilistic models of information retrieval based on measuring the divergence from randomness. ACM Transactions on Information Systems 20(4), 357-389], which results in a 170% enhancement in mean average precision compared to the classical tf-idf vector-space model. This paper also reports our evaluation of the impact of manually assigned descriptors (MeSH, or Medical Subject Headings) on retrieval effectiveness, showing that including these terms can improve retrieval performance by 2.4% to 13.5%, depending on the underlying IR model. Finally, we design a new general blind-query expansion approach that shows improved retrieval performance compared to the Rocchio approach.
  5. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.06
    0.057245485 = product of:
      0.11449097 = sum of:
        0.08476957 = weight(_text_:j in 3103) [ClassicSimilarity], result of:
          0.08476957 = score(doc=3103,freq=6.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.608066 = fieldWeight in 3103, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.078125 = fieldNorm(doc=3103)
        0.0297214 = product of:
          0.0594428 = sum of:
            0.0594428 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.0594428 = score(doc=3103,freq=2.0), product of:
                0.1536382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043873694 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 2.1999 20:55:22
  6. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.06
    0.055064194 = product of:
      0.11012839 = sum of:
        0.06851843 = weight(_text_:j in 6418) [ClassicSimilarity], result of:
          0.06851843 = score(doc=6418,freq=2.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.4914939 = fieldWeight in 6418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.109375 = fieldNorm(doc=6418)
        0.041609958 = product of:
          0.083219916 = sum of:
            0.083219916 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.083219916 = score(doc=6418,freq=2.0), product of:
                0.1536382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043873694 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Online. 22(1998) no.6, S.57-58
  7. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.05
    0.0541403 = product of:
      0.1082806 = sum of:
        0.09044776 = weight(_text_:van in 6967) [ClassicSimilarity], result of:
          0.09044776 = score(doc=6967,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.36967915 = fieldWeight in 6967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=6967)
        0.017832838 = product of:
          0.035665676 = sum of:
            0.035665676 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.035665676 = score(doc=6967,freq=2.0), product of:
                0.1536382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043873694 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  8. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: ¬The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.04
    0.037686568 = product of:
      0.15074627 = sum of:
        0.15074627 = weight(_text_:van in 7522) [ClassicSimilarity], result of:
          0.15074627 = score(doc=7522,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.6161319 = fieldWeight in 7522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.078125 = fieldNorm(doc=7522)
      0.25 = coord(1/4)
    
  9. Van der Walt, H.E.A.; Brakel, P.A. van: Method for the evaluation of the retrieval effectiveness of a CD-ROM bibliographic database (1991) 0.04
    0.0373078 = product of:
      0.1492312 = sum of:
        0.1492312 = weight(_text_:van in 3114) [ClassicSimilarity], result of:
          0.1492312 = score(doc=3114,freq=4.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.60993946 = fieldWeight in 3114, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3114)
      0.25 = coord(1/4)
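Result 9 is the first hit where the term occurs four times in the field (freq=4.0), so tf = sqrt(4) = 2.0 instead of 1.4142135; the rest of the computation is unchanged. A quick check, with the idf and queryNorm constants read directly from the tree:

```python
import math

idf, query_norm = 5.5765896, 0.043873694  # as printed in the explain tree
tf = math.sqrt(4.0)                       # 2.0 = tf(freq=4.0)
field_weight = tf * idf * 0.0546875       # fieldNorm(doc=3114) -> 0.60993946
score = (idf * query_norm) * field_weight * 0.25  # coord(1/4) -> 0.0373078
```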
    
  10. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.04
    0.035338324 = product of:
      0.07067665 = sum of:
        0.058730084 = weight(_text_:j in 6386) [ClassicSimilarity], result of:
          0.058730084 = score(doc=6386,freq=8.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.4212805 = fieldWeight in 6386, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
        0.01194656 = product of:
          0.02389312 = sum of:
            0.02389312 = weight(_text_:den in 6386) [ClassicSimilarity], result of:
              0.02389312 = score(doc=6386,freq=2.0), product of:
                0.12575069 = queryWeight, product of:
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.043873694 = queryNorm
                0.19000389 = fieldWeight in 6386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6386)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Retrieval tests are the most widely accepted method of justifying new content-indexing procedures against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J), comparing natural-language retrieval with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which was adapted by IBM to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of text documentation at G+J, and the tests are assessed from both statistical and qualitative points of view. One finding is that DocCat shows some shortcomings compared to intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval is, in this setting and for the specific requirements of G+J text documentation, not usable as it stands.
  11. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.03
    0.029448599 = product of:
      0.058897197 = sum of:
        0.04894173 = weight(_text_:j in 5863) [ClassicSimilarity], result of:
          0.04894173 = score(doc=5863,freq=8.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.35106707 = fieldWeight in 5863, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5863)
        0.009955467 = product of:
          0.019910933 = sum of:
            0.019910933 = weight(_text_:den in 5863) [ClassicSimilarity], result of:
              0.019910933 = score(doc=5863,freq=2.0), product of:
                0.12575069 = queryWeight, product of:
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.043873694 = queryNorm
                0.15833658 = fieldWeight in 5863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5863)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Retrieval tests are the most widely accepted method of justifying new content-indexing procedures against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J), comparing natural-language retrieval with Boolean retrieval. The two systems are Autonomy, by Autonomy Inc., and DocCat, which was adapted by IBM to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of text documentation at G+J, and the tests are assessed from both statistical and qualitative points of view. One finding is that DocCat shows some shortcomings compared to intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval is, in this setting and for the specific requirements of G+J text documentation, not usable as it stands.
  12. Strzalkowski, T.; Guthrie, L.; Karlgren, J.; Leistensnider, J.; Lin, F.; Perez-Carballo, J.; Straszheim, T.; Wang, J.; Wilding, J.: Natural language information retrieval : TREC-5 report (1997) 0.03
    0.027359262 = product of:
      0.10943705 = sum of:
        0.10943705 = weight(_text_:j in 3100) [ClassicSimilarity], result of:
          0.10943705 = score(doc=3100,freq=10.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.78500986 = fieldWeight in 3100, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.078125 = fieldNorm(doc=3100)
      0.25 = coord(1/4)
    
  13. Information retrieval experiment (1981) 0.03
    0.026380597 = product of:
      0.10552239 = sum of:
        0.10552239 = weight(_text_:van in 2653) [ClassicSimilarity], result of:
          0.10552239 = score(doc=2653,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.43129233 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2653)
      0.25 = coord(1/4)
    
    Content
    Contains the contributions: ROBERTSON, S.E.: The methodology of information retrieval experiment; RIJSBERGEN, C.J. van: Retrieval effectiveness; BELKIN, N.: Ineffable concepts in information retrieval; TAGUE, J.M.: The pragmatics of information retrieval experimentation; LANCASTER, F.W.: Evaluation within the environment of an operating information service; BARRACLOUGH, E.D.: Opportunities for testing with online systems; KEEN, M.E.: Laboratory tests of manual systems; ODDY, R.N.: Laboratory tests: automatic systems; HEINE, M.D.: Simulation, and simulation experiments; COOPER, W.S.: Gedanken experimentation: an alternative to traditional system testing?; SPARCK JONES, K.: Actual tests - retrieval system tests; EVANS, L.: An experiment: search strategy variation in SDI profiles; SALTON, G.: The Smart environment for retrieval system evaluation - advantage and problem areas
  14. Sparck Jones, K.; Rijsbergen, C.J. van: Progress in documentation : Information retrieval test collection (1976) 0.03
    0.026380597 = product of:
      0.10552239 = sum of:
        0.10552239 = weight(_text_:van in 4161) [ClassicSimilarity], result of:
          0.10552239 = score(doc=4161,freq=2.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.43129233 = fieldWeight in 4161, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4161)
      0.25 = coord(1/4)
    
  15. Oberhauser, O.; Labner, J.: OPAC-Erweiterung durch automatische Indexierung : Empirische Untersuchung mit Daten aus dem Österreichischen Verbundkatalog (2002) 0.03
    0.025028545 = product of:
      0.05005709 = sum of:
        0.029365042 = weight(_text_:j in 883) [ClassicSimilarity], result of:
          0.029365042 = score(doc=883,freq=2.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.21064025 = fieldWeight in 883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=883)
        0.020692049 = product of:
          0.041384097 = sum of:
            0.041384097 = weight(_text_:den in 883) [ClassicSimilarity], result of:
              0.041384097 = score(doc=883,freq=6.0), product of:
                0.12575069 = queryWeight, product of:
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.043873694 = queryNorm
                0.32909638 = fieldWeight in 883, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.046875 = fieldNorm(doc=883)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Following the indexing projects MILOS I and MILOS II carried out in the 1990s, which examined the suitability of an automatic indexing procedure for library catalogues, an empirical study was conducted on a representative sample of title records from the Austrian Union Catalogue. The aim was to test and assess the feasibility of deploying this procedure in the union's online catalogues. In keeping with real OPAC usage, only the effect on the basic index ("all fields") enriched with automatically generated terms was examined. To this end, 100 queries were run first against the original basic index and then against the enriched basic index in an OPAC under Aleph 500. The tests showed an increase in relevant hits with only slight losses in precision, a reduction in zero-hit results, and insights into the effect of existing verbal subject indexing.
  16. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.02
    0.02359894 = product of:
      0.04719788 = sum of:
        0.029365042 = weight(_text_:j in 3564) [ClassicSimilarity], result of:
          0.029365042 = score(doc=3564,freq=2.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.21064025 = fieldWeight in 3564, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=3564)
        0.017832838 = product of:
          0.035665676 = sum of:
            0.035665676 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.035665676 = score(doc=3564,freq=2.0), product of:
                0.1536382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043873694 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    9. 1.1996 10:22:31
    Source
    ASIS'89. Managing information and technology. Proceedings of the 52nd annual meeting of the American Society for Information Science, Washington D.C., 30.10.-2.11.1989. Vol.26. Ed. by J. Katzer and G.B. Newby
  17. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.02
    0.02359894 = product of:
      0.04719788 = sum of:
        0.029365042 = weight(_text_:j in 328) [ClassicSimilarity], result of:
          0.029365042 = score(doc=328,freq=2.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.21064025 = fieldWeight in 328, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=328)
        0.017832838 = product of:
          0.035665676 = sum of:
            0.035665676 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.035665676 = score(doc=328,freq=2.0), product of:
                0.1536382 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043873694 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Cf.: http://www.degruyter.com/view/j/iwp.2012.63.issue-3/iwp-2012-0029/iwp-2012-0029.xml?format=INT. See also: http://www.ib.hu-berlin.de/~mayr/arbeiten/iwp-2012-final.pdf.
    Date
    22. 7.2012 19:25:54
  18. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Combining and selecting characteristics of information use (2002) 0.02
    0.021318743 = product of:
      0.08527497 = sum of:
        0.08527497 = weight(_text_:van in 5208) [ClassicSimilarity], result of:
          0.08527497 = score(doc=5208,freq=4.0), product of:
            0.24466558 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043873694 = queryNorm
            0.34853685 = fieldWeight in 5208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.03125 = fieldNorm(doc=5208)
      0.25 = coord(1/4)
    
    Abstract
    Ruthven, Lalmas, and van Rijsbergen use traditional term importance measures like inverse document frequency; noise, based upon in-document frequency; and term frequency, supplemented by theme value, which is calculated from the differences between the expected positions of words in a text and their actual positions, on the assumption that an even distribution indicates term association with a main topic; and context, which is based on a query term's distance from the nearest other query term relative to the average expected distribution of all query terms in the document. They then define document characteristics like specificity, the sum of all idf values in a document over the total terms in the document; document complexity, measured by the document's average idf value; and information-to-noise ratio (info-noise), tokens after stopping and stemming over tokens before these processes, measuring the ratio of useful to non-useful information in a document. Retrieval tests are then carried out using each characteristic, combinations of the characteristics, and relevance feedback to determine the correct combination of characteristics. A file ranks independently of query terms by both specificity and info-noise, but if presence of a query term is required, unique rankings are generated. Tested on five standard collections, the traditional characteristics outperformed the new characteristics, which did, however, outperform random retrieval. All possible combinations of characteristics were also tested, both with and without a set of scaling weights applied. All characteristics can benefit from combination with another characteristic or set of characteristics, and performance as a single characteristic is a good indicator of performance in combination. Larger combinations tended to be more effective than smaller ones, and weighting increased precision measures of middle-ranking combinations but decreased the ranking of poorer combinations.
    The best combinations vary for each collection, and in some collections with the addition of weighting. Finally, with all documents ranked by the all-characteristics combination, they take the top 30 documents and calculate the characteristic scores for each term in both the relevant and the non-relevant sets. Then, taking for each query term the characteristics whose average was higher for relevant than for non-relevant documents, the documents are re-ranked. This relevance feedback method of selecting characteristics can select a good set of characteristics for query terms.
  19. Allan, J.; Croft, W.B.; Callan, J.: ¬The University of Massachusetts and a dozen TRECs (2005) 0.02
    0.02076422 = product of:
      0.08305688 = sum of:
        0.08305688 = weight(_text_:j in 5086) [ClassicSimilarity], result of:
          0.08305688 = score(doc=5086,freq=4.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.5957806 = fieldWeight in 5086, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.09375 = fieldNorm(doc=5086)
      0.25 = coord(1/4)
    
  20. Griesbaum, J.: Evaluierung hybrider Suchsysteme im WWW (2000) 0.02
    0.020655802 = product of:
      0.041311603 = sum of:
        0.029365042 = weight(_text_:j in 2482) [ClassicSimilarity], result of:
          0.029365042 = score(doc=2482,freq=2.0), product of:
            0.1394085 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.043873694 = queryNorm
            0.21064025 = fieldWeight in 2482, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
        0.01194656 = product of:
          0.02389312 = sum of:
            0.02389312 = weight(_text_:den in 2482) [ClassicSimilarity], result of:
              0.02389312 = score(doc=2482,freq=2.0), product of:
                0.12575069 = queryWeight, product of:
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.043873694 = queryNorm
                0.19000389 = fieldWeight in 2482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.866198 = idf(docFreq=6840, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2482)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The starting point of this work is the search problem on the World Wide Web. Search engines are, on the one hand, indispensable for successful information retrieval; on the other hand, they are accused of mediocre performance. The topic of this work is an investigation of the retrieval effectiveness of German-language search engines, to establish what retrieval effectiveness users can currently expect. One approach to increasing the retrieval effectiveness of search engines is to mix editorially created and automatically generated search results in one hit list. The goal of this work is to evaluate the retrieval effectiveness of such hybrid systems in comparison with purely crawler-based search engines. First, the fundamental problem areas in evaluating retrieval systems are analysed. Following the methodology proposed by Tague-Sutcliffe, a possible procedure is derived that takes Web-specific peculiarities into account. Building on this, the concrete setting for the evaluation is worked out, and a retrieval effectiveness test is carried out on the search engines Lycos.de, AltaVista.de and QualiGo.

Languages

  • e 97
  • d 36
  • f 2
  • chi 1
  • m 1
  • nl 1

Types

  • a 118
  • s 6
  • m 5
  • r 5
  • x 5
  • el 4
  • p 2