Search (262 results, page 1 of 14)

  • Filter: theme_ss:"Retrievalstudien"
  1. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.03
    0.034900956 = product of:
      0.13087858 = sum of:
        0.009659718 = product of:
          0.019319436 = sum of:
            0.019319436 = weight(_text_:online in 640) [ClassicSimilarity], result of:
              0.019319436 = score(doc=640,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20118743 = fieldWeight in 640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=640)
          0.5 = coord(1/2)
        0.046685066 = weight(_text_:software in 640) [ClassicSimilarity], result of:
          0.046685066 = score(doc=640,freq=4.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.3719205 = fieldWeight in 640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.052194204 = weight(_text_:evaluation in 640) [ClassicSimilarity], result of:
          0.052194204 = score(doc=640,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.3932532 = fieldWeight in 640, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
        0.022339594 = weight(_text_:web in 640) [ClassicSimilarity], result of:
          0.022339594 = score(doc=640,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=640)
      0.26666668 = coord(4/15)
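The explain trees shown throughout these results follow Lucene's ClassicSimilarity (TF-IDF) formula. As a sketch, the "online" term contribution above can be recomputed from the listed components; the resulting weight is then halved by the coord(1/2) factor shown in the tree:

```python
import math

# Values taken from the explain tree of result 1 (term "online", doc 640)
num_docs = 44218        # maxDocs
doc_freq = 5778         # docFreq of "online"
freq = 2.0              # term frequency in the field
field_norm = 0.046875   # stored length normalization for the field
query_norm = 0.031640913

# ClassicSimilarity components
idf = 1.0 + math.log(num_docs / (doc_freq + 1))  # ≈ 3.0349014
tf = math.sqrt(freq)                             # ≈ 1.4142135
query_weight = idf * query_norm                  # ≈ 0.096027054
field_weight = tf * idf * field_norm             # ≈ 0.20118743
score = query_weight * field_weight              # ≈ 0.019319436 (before coord)

print(idf, tf, score)
```

Every numeric line in the explain output above can be reproduced this way, which is useful when checking why one result outranks another.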
    
    Abstract
    Involving users in early phases of software development has become a common strategy, as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate, and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommender systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project, which aims to create an evaluation infrastructure allowing experimental systems to run alongside production web-based academic search systems with real users. STELLA combines user interactions and log file analysis to enable large-scale A/B experiments for academic search.
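The abstract describes large-scale A/B experiments running experimental rankers alongside a production system. As a minimal sketch (all names hypothetical, not STELLA's actual implementation), such living-lab setups typically bucket users deterministically so that log analysis can attribute interactions to one arm across sessions:

```python
import hashlib

def assign_arm(user_id: str, experiment: str, ratio: float = 0.5) -> str:
    """Deterministically bucket a user into 'experimental' or 'production'.

    Hashing (experiment, user_id) keeps the assignment stable across
    sessions, which A/B analysis of interaction logs requires.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "experimental" if bucket < ratio else "production"

print(assign_arm("user-42", "stella-ranker-test"))
```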
  2. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.03
    0.025564382 = product of:
      0.1278219 = sum of:
        0.041082006 = product of:
          0.08216401 = sum of:
            0.08216401 = weight(_text_:recherche in 744) [ClassicSimilarity], result of:
              0.08216401 = score(doc=744,freq=2.0), product of:
                0.17150146 = queryWeight, product of:
                  5.4202437 = idf(docFreq=531, maxDocs=44218)
                  0.031640913 = queryNorm
                0.47908637 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4202437 = idf(docFreq=531, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
        0.06959227 = weight(_text_:evaluation in 744) [ClassicSimilarity], result of:
          0.06959227 = score(doc=744,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.5243376 = fieldWeight in 744, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=744)
        0.017147627 = product of:
          0.034295253 = sum of:
            0.034295253 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.034295253 = score(doc=744,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from research and industry. The work of the conference is intended for designers (rather than users) of systems that access full-text information. Describes the context, objectives, organization, evaluation methods, and limits of TREC.
    Date
    1. 8.1996 22:01:00
  3. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.02
    0.021696148 = product of:
      0.10848074 = sum of:
        0.049209163 = weight(_text_:evaluation in 261) [ClassicSimilarity], result of:
          0.049209163 = score(doc=261,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.37076265 = fieldWeight in 261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=261)
        0.042123944 = weight(_text_:web in 261) [ClassicSimilarity], result of:
          0.042123944 = score(doc=261,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.4079388 = fieldWeight in 261, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=261)
        0.017147627 = product of:
          0.034295253 = sum of:
            0.034295253 = weight(_text_:22 in 261) [ClassicSimilarity], result of:
              0.034295253 = score(doc=261,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.30952093 = fieldWeight in 261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=261)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Abstract
    The German search engines Abacho, Acoon, Fireball, and Lycos, as well as the web directories Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision, and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, a recall of around 22%, a precision of just under 19%, and an availability of 24% are achieved.
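The three measures used in the study above can be stated compactly. A sketch with hypothetical result lists (the paper's pooling details are not given here, so the relative-recall pool is an assumption):

```python
def precision_at_k(results, relevant, k=25):
    """Share of the top-k results that are relevant."""
    top = results[:k]
    return sum(1 for d in top if d in relevant) / len(top) if top else 0.0

def relative_recall(results, pooled_relevant, k=25):
    """Relevant documents found, over the pool of relevant documents
    retrieved by all engines under test (no absolute recall base exists
    on the open web)."""
    found = sum(1 for d in results[:k] if d in pooled_relevant)
    return found / len(pooled_relevant) if pooled_relevant else 0.0

def availability(results, reachable, k=25):
    """Share of listed documents that were actually retrievable
    (i.e., not a dead link)."""
    top = results[:k]
    return sum(1 for d in top if d in reachable) / len(top) if top else 0.0

results = [f"d{i}" for i in range(25)]
relevant = {"d0", "d3", "d7", "d9", "d30"}
print(precision_at_k(results, relevant))   # 4 of 25 relevant -> 0.16
print(relative_recall(results, relevant))  # 4 of 5 pooled    -> 0.8
```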
  4. Behnert, C.; Lewandowski, D.: ¬A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.02
    0.019177739 = product of:
      0.09588869 = sum of:
        0.008049765 = product of:
          0.01609953 = sum of:
            0.01609953 = weight(_text_:online in 3700) [ClassicSimilarity], result of:
              0.01609953 = score(doc=3700,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.16765618 = fieldWeight in 3700, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3700)
          0.5 = coord(1/2)
        0.061511453 = weight(_text_:evaluation in 3700) [ClassicSimilarity], result of:
          0.061511453 = score(doc=3700,freq=8.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.4634533 = fieldWeight in 3700, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
        0.026327467 = weight(_text_:web in 3700) [ClassicSimilarity], result of:
          0.026327467 = score(doc=3700,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.25496176 = fieldWeight in 3700, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3700)
      0.2 = coord(3/15)
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems, including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context, considering the specific characteristics of modern library materials. Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
  5. Lazonder, A.W.; Biemans, H.J.A.; Wopereis, I.G.J.H.: Differences between novice and experienced users in searching information on the World Wide Web (2000) 0.02
    0.018600041 = product of:
      0.1395003 = sum of:
        0.049952857 = weight(_text_:web in 4598) [ClassicSimilarity], result of:
          0.049952857 = score(doc=4598,freq=10.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.48375595 = fieldWeight in 4598, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
        0.08954745 = weight(_text_:site in 4598) [ClassicSimilarity], result of:
          0.08954745 = score(doc=4598,freq=4.0), product of:
            0.1738463 = queryWeight, product of:
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.031640913 = queryNorm
            0.5150955 = fieldWeight in 4598, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
      0.13333334 = coord(2/15)
    
    Abstract
    Searching for information on the WWW basically comes down to locating an appropriate Web site and to retrieving relevant information from that site. This study examined the effect of a user's WWW experience on both phases of the search process. 35 students from 2 schools for Dutch pre-university education were observed while performing 3 search tasks. The results indicate that subjects with WWW-experience are more proficient in locating Web sites than are novice WWW-users. The observed differences were ascribed to the experts' superior skills in operating Web search engines. However, on tasks that required subjects to locate information on specific Web sites, the performance of experienced and novice users was equivalent - a result that is in line with hypertext research. Based on these findings, implications for training and supporting students in searching for information on the WWW are identified. Finally, the role of the subjects' level of domain expertise is discussed and directions for future research are proposed
  6. Hierl, S.: Bezugsrahmen für die Evaluation von Information Retrieval Systemen mit Visualisierungskomponenten (2007) 0.02
    0.018298194 = product of:
      0.13723645 = sum of:
        0.098418325 = weight(_text_:evaluation in 3040) [ClassicSimilarity], result of:
          0.098418325 = score(doc=3040,freq=8.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.7415253 = fieldWeight in 3040, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=3040)
        0.03881812 = product of:
          0.07763624 = sum of:
            0.07763624 = weight(_text_:analyse in 3040) [ClassicSimilarity], result of:
              0.07763624 = score(doc=3040,freq=2.0), product of:
                0.16670908 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.031640913 = queryNorm
                0.46569893 = fieldWeight in 3040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3040)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
    This paper addresses the design and execution of sustainable evaluations of information retrieval systems with visualization components. As a state-of-the-art analysis shows, previous evaluation approaches differ both in their choice of methods and in their study design. The paper then discusses the main challenges arising in evaluations of this kind, together with proposals for potential solutions. Based on a morphological framework, a frame of reference for the evaluation of information retrieval systems with visualization components is proposed, pursuing an integrated approach that combines suitable methods from usability evaluation and retrieval effectiveness evaluation.
  7. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.02
    0.018238755 = product of:
      0.13679065 = sum of:
        0.121786475 = weight(_text_:evaluation in 7302) [ClassicSimilarity], result of:
          0.121786475 = score(doc=7302,freq=16.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.9175908 = fieldWeight in 7302, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.015004174 = product of:
          0.030008348 = sum of:
            0.030008348 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
              0.030008348 = score(doc=7302,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.2708308 = fieldWeight in 7302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7302)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
  8. MacFarlane, A.: Evaluation of web search for the information practitioner (2007) 0.02
    0.017722504 = product of:
      0.13291878 = sum of:
        0.07381375 = weight(_text_:evaluation in 817) [ClassicSimilarity], result of:
          0.07381375 = score(doc=817,freq=8.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.556144 = fieldWeight in 817, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=817)
        0.059105016 = weight(_text_:web in 817) [ClassicSimilarity], result of:
          0.059105016 = score(doc=817,freq=14.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.57238775 = fieldWeight in 817, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=817)
      0.13333334 = coord(2/15)
    
    Abstract
    Purpose - The aim of the paper is to put forward a structured mechanism for web search evaluation. The paper points to useful scientific research and shows how information practitioners can use these methods to evaluate web search for their users. Design/methodology/approach - The paper puts forward an approach that utilizes traditional laboratory-based evaluation measures such as average precision/precision at N documents, augmented with diagnostic measures such as "link broken," which are used to show why precision measures are depressed as well as the quality of the search engine's crawling mechanism. Findings - The paper shows how to use diagnostic measures in conjunction with precision in order to evaluate web search. Practical implications - The methodology presented in this paper will be useful to any information professional who regularly uses web search as part of their information seeking and needs to evaluate web search services. Originality/value - The paper argues that the use of diagnostic measures is essential in web search, as precision measures on their own do not allow a searcher to understand why search results differ between search engines.
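The idea of augmenting precision at N with a diagnostic broken-link measure can be sketched as follows. This is an illustration in the spirit of the approach described, not the paper's exact definitions:

```python
def evaluate_serp(judgments):
    """judgments: list of (relevant, link_ok) booleans, one per ranked result.

    Returns precision at N (counting only reachable relevant results)
    alongside a diagnostic broken-link rate that helps explain a
    depressed precision score.
    """
    n = len(judgments)
    if n == 0:
        return {"precision": 0.0, "broken_rate": 0.0}
    relevant = sum(1 for rel, ok in judgments if rel and ok)
    broken = sum(1 for _, ok in judgments if not ok)
    return {"precision": relevant / n, "broken_rate": broken / n}

serp = [(True, True), (True, False), (False, True), (True, True)]
print(evaluate_serp(serp))  # precision 0.5, broken_rate 0.25
```

A low precision with a high broken-link rate points at the engine's crawler rather than its ranking, which is exactly the distinction the diagnostic measure is meant to surface.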
  9. Dzeyk, W.: Effektiv und nutzerfreundlich : Einsatz von semantischen Technologien und Usability-Methoden zur Verbesserung der medizinischen Literatursuche (2010) 0.02
    0.01730944 = product of:
      0.086547196 = sum of:
        0.03911765 = weight(_text_:suchmaschine in 4416) [ClassicSimilarity], result of:
          0.03911765 = score(doc=4416,freq=2.0), product of:
            0.17890577 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.031640913 = queryNorm
            0.21864946 = fieldWeight in 4416, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4416)
        0.030446619 = weight(_text_:evaluation in 4416) [ClassicSimilarity], result of:
          0.030446619 = score(doc=4416,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.2293977 = fieldWeight in 4416, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4416)
        0.016982928 = product of:
          0.033965856 = sum of:
            0.033965856 = weight(_text_:analyse in 4416) [ClassicSimilarity], result of:
              0.033965856 = score(doc=4416,freq=2.0), product of:
                0.16670908 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20374328 = fieldWeight in 4416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4416)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Abstract
    This paper presents the results of the MorphoSaurus project of the German National Library of Medicine (ZB MED). The aim of the research project was to substantially improve the information retrieval of the medical search engine MEDPILOT using computational-linguistic approaches, and to optimize the usability of the search engine interface. The project was carried out at ZB MED in Cologne in cooperation with Averbis GmbH of Freiburg between June 2007 and December 2008, and was made possible by funding from the Pact for Research and Innovation. While Averbis contributed the MorphoSaurus technology for handling problematic linguistic aspects of search queries and implemented key ZB MED databases in a test system with modern search engine technology, a ZB MED team evaluated the technology's potential. In addition to a performance comparison between the existing MEDPILOT search and the new search architecture, benchmarking against competing search engines such as PubMed, Scirus, Google, Google Scholar, and GoPubMed was carried out. For the evaluation, several test collections were created, whose items and search phrases were derived from a content analysis of real search queries submitted to the MEDPILOT system. Checking the relevance of the test search engine's hits, as the key criterion for search quality, showed the following: the MorphoSaurus technology enables largely language-independent processing of foreign-language medical content. Moreover, the new technology shows its strengths particularly in treating lay and expert vocabulary equivalently and in analyzing compounds, synonyms, and grammatical variants.
    In addition, modules for detecting spelling errors and for resolving acronyms and medical abbreviations have been implemented, promising a further increase in system performance. A comparison based on MEDLINE data showed that the Averbis test search environment was clearly superior to the search engines MEDPILOT, PubMed, GoPubMed, and Scirus: hit relevance was higher, more hits were found overall, and the number of zero-hit messages was the lowest of all the search engines compared.
    In a comparison taking all available sources into account, the MorphoSaurus technique achieved results similar to those of Google or Google Scholar, despite a much smaller document base. The evaluation results suggest that, by extending the available data basis, the MorphoSaurus approach could even clearly surpass the performance of Google or Google Scholar in medical literature search. In addition to the retrieval tests, a usability study of the test search engine was conducted with participants from the medical field. The participants attested the search interface a high degree of usability and usefulness. The scenario-based usability test also showed that participants rated the integrated support features for improving user-friendliness during search as very positive and useful. In the test search engine this support was realized, for example, by expanding and presenting related MeSH and ICD-10 terms. The introduction of a slider for effectively narrowing the search space was likewise rated predominantly positively. Furthermore, after submitting a query, so-called related search terms from various medical subfields were displayed; this facet function served to narrow or refine the search and was judged by a majority of the participants to be a useful aid. Overall, the MorphoSaurus project, with its specific approach, is a successful example of libraries' capacity for innovation in public information provision. The adaptability of the MorphoSaurus technology via domain-specific thesauri also makes it highly transferable to search engine projects in other content domains.
  10. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.02
    0.016520817 = product of:
      0.08260408 = sum of:
        0.053270485 = weight(_text_:evaluation in 2587) [ClassicSimilarity], result of:
          0.053270485 = score(doc=2587,freq=6.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.40136236 = fieldWeight in 2587, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2587)
        0.01861633 = weight(_text_:web in 2587) [ClassicSimilarity], result of:
          0.01861633 = score(doc=2587,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.18028519 = fieldWeight in 2587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2587)
        0.010717267 = product of:
          0.021434534 = sum of:
            0.021434534 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
              0.021434534 = score(doc=2587,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.19345059 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Abstract
    Purpose: The purpose of this paper is to propose a method to obtain more accurate results when comparing the performance of paired information retrieval (IR) systems, with reference to the current method, which is based on the mean effectiveness scores of the systems across a set of identified topics/queries. Design/methodology/approach: In the proposed approach, instead of the classic method of using a set of topic scores, document-level scores are taken as the evaluation unit. These document scores are defined document weights, which take the role of the systems' mean average precision (MAP) scores as the test statistic in significance testing. The experiments were conducted using the TREC 9 Web track collection. Findings: The p-values generated through the two types of significance tests, namely Student's t-test and the Mann-Whitney test, show that by using document-level scores as the evaluation unit, the difference between IR systems is more significant than when topic scores are used. Originality/value: Utilizing a suitable test collection is a primary prerequisite for comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
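The paired comparison described in the abstract above rests on a standard paired significance test over per-unit effectiveness scores. A minimal sketch (scores are invented; the paper's document-weight definition is not reproduced here):

```python
import math
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Paired t statistic over per-unit effectiveness scores.

    The units may be topics (the classic setup) or, as proposed above,
    document-level scores; the test itself is unchanged.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return mean(diffs) / (sd / math.sqrt(n))

# Hypothetical per-topic average-precision scores for two systems
sys_a = [0.42, 0.55, 0.38, 0.61, 0.47]
sys_b = [0.35, 0.50, 0.30, 0.58, 0.40]
print(paired_t(sys_a, sys_b))  # ≈ 6.708
```

The paper's point is that switching the unit from a handful of topic scores to many document-level scores changes n and the variance estimate, and hence the p-value, not the test machinery itself.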
  11. Hawking, D.; Craswell, N.: ¬The very large collection and Web tracks (2005) 0.02
    0.015799059 = product of:
      0.11849294 = sum of:
        0.07381375 = weight(_text_:evaluation in 5085) [ClassicSimilarity], result of:
          0.07381375 = score(doc=5085,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.556144 = fieldWeight in 5085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.09375 = fieldNorm(doc=5085)
        0.044679187 = weight(_text_:web in 5085) [ClassicSimilarity], result of:
          0.044679187 = score(doc=5085,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.43268442 = fieldWeight in 5085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=5085)
      0.13333334 = coord(2/15)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  12. Huffman, G.D.; Vital, D.A.; Bivins, R.G.: Generating indices with lexical association methods : term uniqueness (1990) 0.02
    0.015541943 = product of:
      0.07770971 = sum of:
        0.008049765 = product of:
          0.01609953 = sum of:
            0.01609953 = weight(_text_:online in 4152) [ClassicSimilarity], result of:
              0.01609953 = score(doc=4152,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.16765618 = fieldWeight in 4152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4152)
          0.5 = coord(1/2)
        0.03890422 = weight(_text_:software in 4152) [ClassicSimilarity], result of:
          0.03890422 = score(doc=4152,freq=4.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.30993375 = fieldWeight in 4152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4152)
        0.030755727 = weight(_text_:evaluation in 4152) [ClassicSimilarity], result of:
          0.030755727 = score(doc=4152,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4152)
      0.2 = coord(3/15)
    
    Abstract
    A software system has been developed which orders citations retrieved from an online database in terms of relevancy. The system resulted from an effort under NASA's Technology Utilization Program to create new advanced software tools that largely automate the process of determining the relevancy of database citations retrieved to support large technology transfer studies. The ranking is based on the generation of an enriched vocabulary using lexical association methods, a user assessment of the vocabulary, and a combination of the user assessment and the lexical metric. One of the key elements in relevancy ranking is the enriched vocabulary - the terms must be both unique and descriptive. This paper examines term uniqueness. Six lexical association methods were employed to generate characteristic word indices. A limited subset of the terms - the highest 20, 40, 60 and 75% of the unique words - was compared and uniqueness factors were developed. Computational times were also measured. It was found that methods based on occurrence and signal produced virtually the same terms. The limited subsets of terms produced by the exact and centroid discrimination values were also nearly identical. Unique term sets were produced by the occurrence, variance and discrimination value (centroid) methods. An end-user evaluation showed that the generated terms were largely distinct and had values of word precision consistent with the values of the search precision.
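    The kind of comparison reported here - taking the top fraction of terms ranked by each lexical association method and measuring how distinct the resulting term sets are - can be sketched as follows. The weights and term names are hypothetical, and the uniqueness measure is a simple set difference, not the paper's exact factor:

    ```python
    def top_terms(weights, fraction):
        """Top `fraction` of terms, ranked by lexical association weight."""
        ranked = sorted(weights, key=weights.get, reverse=True)
        cutoff = max(1, round(len(ranked) * fraction))
        return set(ranked[:cutoff])

    def uniqueness(terms_a, terms_b):
        """Share of method A's terms that method B did not produce."""
        return len(terms_a - terms_b) / len(terms_a)

    # Hypothetical association weights from two methods (illustrative only):
    occurrence = {"plasma": 3.1, "thruster": 2.7, "nozzle": 1.9, "the": 0.2}
    variance   = {"plasma": 2.8, "nozzle": 2.2, "ion": 1.5, "the": 0.1}
    a = top_terms(occurrence, 0.75)  # highest 75% of the index terms
    b = top_terms(variance, 0.75)
    print(uniqueness(a, b))  # → 0.3333333333333333
    ```

    Repeating this at the 20, 40, 60 and 75% cutoffs would give a uniqueness profile per pair of methods, analogous to the factors developed in the study.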
  13. Oppenheim, C.; Morris, A.; McKnight, C.: ¬The evaluation of WWW search engines (2000) 0.01
    0.013982118 = product of:
      0.10486588 = sum of:
        0.08252628 = weight(_text_:evaluation in 4546) [ClassicSimilarity], result of:
          0.08252628 = score(doc=4546,freq=10.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.6217879 = fieldWeight in 4546, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=4546)
        0.022339594 = weight(_text_:web in 4546) [ClassicSimilarity], result of:
          0.022339594 = score(doc=4546,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 4546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4546)
      0.13333334 = coord(2/15)
    
    Abstract
    The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way they have been carried out. The problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, so the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively, and variations in the performance of any given search engine over time can be tracked. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations on the types of tools needed.
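    Because recall cannot be computed over the open web, such evaluations typically fall back on precision over the top-ranked results. A minimal sketch of precision-at-k, with hypothetical document identifiers and relevance judgements:

    ```python
    def precision_at_k(ranked_results, judged_relevant, k):
        """Fraction of the top-k ranked results judged relevant."""
        return sum(1 for doc in ranked_results[:k] if doc in judged_relevant) / k

    # Hypothetical top results from one engine, plus relevance judgements:
    results = ["d1", "d2", "d3", "d4", "d5"]
    relevant = {"d1", "d3", "d5"}
    print(precision_at_k(results, relevant, 5))  # → 0.6
    ```

    Tracking this figure for the same queries over time is one way to monitor the performance drift the authors want a standard toolset to capture.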
  14. Kaizik, A.; Gödert, W.; Oßwald, A.: Evaluation von Subject Gateways des Internet (EJECT) : Projektbericht (2001) 0.01
    0.013723646 = product of:
      0.10292734 = sum of:
        0.07381375 = weight(_text_:evaluation in 1476) [ClassicSimilarity], result of:
          0.07381375 = score(doc=1476,freq=8.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.556144 = fieldWeight in 1476, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=1476)
        0.029113589 = product of:
          0.058227178 = sum of:
            0.058227178 = weight(_text_:analyse in 1476) [ClassicSimilarity], result of:
              0.058227178 = score(doc=1476,freq=2.0), product of:
                0.16670908 = queryWeight, product of:
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.031640913 = queryNorm
                0.3492742 = fieldWeight in 1476, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.268782 = idf(docFreq=618, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1476)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
    The scale and heterogeneity of the information on offer call for ever more differentiated methods and tools for finding information sources within a particular subject field or scientific discipline in a targeted and largely noise-free way. To this end, a number of so-called subject gateways have recently been developed. So far there are few studies of the quality of such tools, nor has a differentiated methodology for such assessments been developed. The project Evaluation von Subject Gateways des Internet (EJECT) therefore pursued the following aims: to demonstrate, through an analysis of existing subject gateways, the variety of ways the term is used and to contribute to sharpening the concept; to outline a methodical approach to the qualitative assessment of subject gateways; and to test this approach through an evaluation of the subject gateway EULER, which was developed for the field of mathematics within an EU project. The results of the evaluation are presented in detail in this study, and the extent to which they can be transferred to the assessment of other gateways is shown.
  15. Palmquist, R.A.; Kim, K.-S.: Cognitive style and on-line database search experience as predictors of Web search performance (2000) 0.01
    0.013601724 = product of:
      0.102012925 = sum of:
        0.038693316 = weight(_text_:web in 4605) [ClassicSimilarity], result of:
          0.038693316 = score(doc=4605,freq=6.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.37471575 = fieldWeight in 4605, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4605)
        0.06331961 = weight(_text_:site in 4605) [ClassicSimilarity], result of:
          0.06331961 = score(doc=4605,freq=2.0), product of:
            0.1738463 = queryWeight, product of:
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.031640913 = queryNorm
            0.3642275 = fieldWeight in 4605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.494352 = idf(docFreq=493, maxDocs=44218)
              0.046875 = fieldNorm(doc=4605)
      0.13333334 = coord(2/15)
    
    Abstract
    This study sought to investigate the effects of cognitive style (field dependent and field independent) and on-line database search experience (novice and experienced) on the WWW search performance of undergraduate college students (n=48). It also attempted to find user factors that could be used to predict search efficiency. Search performance, the dependent variable, was defined in two ways: (1) the time required to retrieve a relevant information item, and (2) the number of nodes traversed in retrieving a relevant information item. The search tasks were carried out on a university Web site and included a factual task and a topical search task of interest to the participant. Results indicated that while cognitive style (FD/FI) significantly influenced the search performance of novice searchers, the influence was greatly reduced in searchers who had on-line database search experience. Based on the findings, suggestions for possible changes to the design of the current Web interface and to user training programs are provided.
  16. Landoni, M.; Bell, S.: Information retrieval techniques for evaluating search engines : a critical overview (2000) 0.01
    0.012820447 = product of:
      0.09615335 = sum of:
        0.07381375 = weight(_text_:evaluation in 716) [ClassicSimilarity], result of:
          0.07381375 = score(doc=716,freq=8.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.556144 = fieldWeight in 716, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.046875 = fieldNorm(doc=716)
        0.022339594 = weight(_text_:web in 716) [ClassicSimilarity], result of:
          0.022339594 = score(doc=716,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=716)
      0.13333334 = coord(2/15)
    
    Abstract
    The objective of this paper is to highlight the importance of a scientifically sound approach to search engine evaluation. There is now a flourishing literature describing attempts at such evaluation following all sorts of approaches, but very often only the final results are published, with little, if any, information about the methodology and procedures adopted. These experiments have been critically investigated and catalogued according to their scientific foundation by Bell [1] in an attempt to provide a valuable framework for future studies in this area. This paper reconsiders some of Bell's ideas in the light of the crisis of classic evaluation techniques for information retrieval and tries to envisage some form of collaboration between the IR and web communities in order to design a better and more consistent platform for the evaluation of tools for interactive information retrieval.
  17. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    0.01245244 = product of:
      0.0622622 = sum of:
        0.008049765 = product of:
          0.01609953 = sum of:
            0.01609953 = weight(_text_:online in 1184) [ClassicSimilarity], result of:
              0.01609953 = score(doc=1184,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.16765618 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
        0.043495167 = weight(_text_:evaluation in 1184) [ClassicSimilarity], result of:
          0.043495167 = score(doc=1184,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.327711 = fieldWeight in 1184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.010717267 = product of:
          0.021434534 = sum of:
            0.021434534 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.021434534 = score(doc=1184,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  18. Schirrmeister, N.-P.; Keil, S.: Aufbau einer Infrastruktur für Information Retrieval-Evaluationen (2012) 0.01
    0.012429901 = product of:
      0.09322426 = sum of:
        0.0440151 = weight(_text_:software in 3097) [ClassicSimilarity], result of:
          0.0440151 = score(doc=3097,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.35064998 = fieldWeight in 3097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
        0.049209163 = weight(_text_:evaluation in 3097) [ClassicSimilarity], result of:
          0.049209163 = score(doc=3097,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.37076265 = fieldWeight in 3097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=3097)
      0.13333334 = coord(2/15)
    
    Abstract
    The project "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE) provides a software infrastructure for supporting information retrieval evaluations (IR evaluations). The infrastructure is based on a tool kit developed at GESIS within the DFG project IRM. The goal is to provide a system that can be used for IR evaluations in research and teaching at the Fachbereich Media. This paper describes some aspects of a project called "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE). Its goal is to build a software infrastructure which supports the evaluation of information retrieval algorithms.
  19. Schabas, A.H.: ¬A comparative evaluation of the retrieval effectiveness of titles, Library of Congress Subject Headings and PRECIS strings for computer searching of UK MARC data (1979) 0.01
    0.012417759 = product of:
      0.09313319 = sum of:
        0.019319436 = product of:
          0.03863887 = sum of:
            0.03863887 = weight(_text_:online in 5277) [ClassicSimilarity], result of:
              0.03863887 = score(doc=5277,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.40237486 = fieldWeight in 5277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5277)
          0.5 = coord(1/2)
        0.07381375 = weight(_text_:evaluation in 5277) [ClassicSimilarity], result of:
          0.07381375 = score(doc=5277,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.556144 = fieldWeight in 5277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.09375 = fieldNorm(doc=5277)
      0.13333334 = coord(2/15)
    
    Theme
    Verbale Doksprachen im Online-Retrieval
  20. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.01
    0.0119626755 = product of:
      0.059813377 = sum of:
        0.012879624 = product of:
          0.025759248 = sum of:
            0.025759248 = weight(_text_:online in 3572) [ClassicSimilarity], result of:
              0.025759248 = score(doc=3572,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.2682499 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
        0.029786127 = weight(_text_:web in 3572) [ClassicSimilarity], result of:
          0.029786127 = score(doc=3572,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.2884563 = fieldWeight in 3572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3572)
        0.017147627 = product of:
          0.034295253 = sum of:
            0.034295253 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.034295253 = score(doc=3572,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.2 = coord(3/15)
    
    Source
    Online. 22(1998) no.3, S.24-26,28

Types

  • a 235
  • m 10
  • s 9
  • r 6
  • el 5
  • x 4
  • d 1
  • p 1