Search (14 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • theme_ss:"Suchmaschinen"
  1. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.04
    0.036590114 = product of:
      0.07318023 = sum of:
        0.0071393843 = weight(_text_:in in 261) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=261,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 261, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=261)
        0.042382717 = weight(_text_:und in 261) [ClassicSimilarity], result of:
          0.042382717 = score(doc=261,freq=10.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.438048 = fieldWeight in 261, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=261)
        0.02365813 = product of:
          0.04731626 = sum of:
            0.04731626 = weight(_text_:22 in 261) [ClassicSimilarity], result of:
              0.04731626 = score(doc=261,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.30952093 = fieldWeight in 261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=261)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
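
     The score breakdowns above are Lucene ClassicSimilarity (TF-IDF) explanations. As a minimal sketch, the following Python snippet reproduces the first leaf of record 1's tree from the constants shown there; the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm) are the standard ClassicSimilarity definitions.

        import math

        # Constants copied from the explain tree for doc 261 above.
        query_norm = 0.043654136
        field_norm = 0.0625                           # fieldNorm(doc=261)
        doc_freq, max_docs, freq = 30841, 44218, 2.0  # for the term "in"

        idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.3602545
        tf = math.sqrt(freq)                           # 1.4142135
        query_weight = idf * query_norm                # 0.059380736
        field_weight = tf * idf * field_norm           # 0.120230645
        print(query_weight * field_weight)             # 0.0071393843

        # The document score multiplies the sum of all matching leaves by
        # coord(3/6): 0.5 * (0.0071393843 + 0.042382717 + 0.02365813)
        # = 0.036590114, the value shown at the top of the tree.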
    
    Abstract
The German search engines Abacho, Acoon, Fireball, and Lycos, as well as the web directories Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision, and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, the engines achieve a recall of around 22%, a precision of just under 19%, and an availability of 24%.
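
     The abstract does not define the three measures; a minimal sketch, assuming the common readings (relative recall against a pool of relevant documents found by all tested engines, precision within the cut-off of 25, and availability as the share of returned links that actually resolve), might look like this:

        # Illustrative only; the exact definitions are in the paper itself.
        def precision_at_k(results, relevant, k=25):
            top = results[:k]
            return sum(1 for url in top if url in relevant) / len(top)

        def relative_recall_at_k(results, pooled_relevant, k=25):
            # "Relative" recall: judged against the union of relevant
            # documents retrieved by all engines, not the whole web.
            found = set(results[:k]) & pooled_relevant
            return len(found) / len(pooled_relevant)

        def availability_at_k(results, reachable, k=25):
            # Share of the top-k links that were still retrievable.
            top = results[:k]
            return sum(1 for url in top if url in reachable) / len(top)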
    Footnote
Cf. also the report in: nfd 53(2002) H.2, S.71
    Source
    nfd Information - Wissenschaft und Praxis. 52(2001) H.7, S.381-392
  2. Günther, M.: Vermitteln Suchmaschinen vollständige Bilder aktueller Themen? : Untersuchung der Gewichtung inhaltlicher Aspekte von Suchmaschinenergebnissen in Deutschland und den USA (2016) 0.02
    0.01721226 = product of:
      0.05163678 = sum of:
        0.008924231 = weight(_text_:in in 3068) [ClassicSimilarity], result of:
          0.008924231 = score(doc=3068,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 3068, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3068)
        0.04271255 = weight(_text_:und in 3068) [ClassicSimilarity], result of:
          0.04271255 = score(doc=3068,freq=26.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.441457 = fieldWeight in 3068, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3068)
      0.33333334 = coord(2/6)
    
    Abstract
Objective - Against the background of search engine bias, the study set out to determine whether the pictures of current international topics conveyed by Google and Bing in Germany and the USA differ with respect to (1) completeness, (2) coverage, and (3) weighting of the respective content aspects. Research methods - For the empirical study, a method was developed and applied that combines approaches from the empirical social sciences (content analysis) and from information science (retrieval tests). Results - It was found that Google and Bing in Germany and the USA (1) do not convey complete pictures of current international topics, that they (2) do not cover the three most important content aspects in the top result positions, and that (3) there are no significant differences in the weighting of the content aspects. However, these results are subject to limitations arising from the methodology and the analysis of the empirical study. Conclusions - Content-related search engine bias does indeed appear to exist; it could influence the opinion formation of search engine users. Despite the great effort required for manual analysis, and the poorer quality of results from automatic analysis, this topic deserves further research.
    Content
Cf.: https://yis.univie.ac.at/index.php/yis/article/view/1355. This article is based on the following thesis: Günther, Markus: Welches Weltbild vermitteln Suchmaschinen? Untersuchung der Gewichtung inhaltlicher Aspekte von Google- und Bing-Ergebnissen in Deutschland und den USA zu aktuellen internationalen Themen. Master's thesis (M.A.), Hochschule für Angewandte Wissenschaften Hamburg, 2015. Full text: http://edoc.sub.uni-hamburg.de/haw/volltexte/2016/332.
  3. Griesbaum, J.: Evaluierung hybrider Suchsysteme im WWW (2000) 0.01
    0.010731532 = product of:
      0.032194596 = sum of:
        0.0075724614 = weight(_text_:in in 2482) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=2482,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 2482, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
        0.024622133 = weight(_text_:und in 2482) [ClassicSimilarity], result of:
          0.024622133 = score(doc=2482,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.2544829 = fieldWeight in 2482, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2482)
      0.33333334 = coord(2/6)
    
    Abstract
The starting point of this work is the search problem on the World Wide Web. Search engines are, on the one hand, indispensable for successful information retrieval; on the other hand, they are accused of mediocre performance. The subject of this work is the investigation of the retrieval effectiveness of German-language search engines. The aim is to establish what retrieval effectiveness users can currently expect. One approach to increasing the retrieval effectiveness of search engines is to blend editorially compiled and automatically generated search results in a single result list. The goal of this work is to evaluate the retrieval effectiveness of such hybrid systems in comparison with purely crawler-based search engines. To this end, the fundamental problem areas in the evaluation of retrieval systems are first analyzed. Following the methodology proposed by Tague-Sutcliffe, a feasible procedure is derived, taking web-specific characteristics into account. Building on this, the concrete setting for conducting the evaluation is developed, and a retrieval effectiveness test is carried out on the search engines Lycos.de, AltaVista.de, and QualiGo.
  4. Griesbaum, J.; Rittberger, M.; Bekavac, B.: Deutsche Suchmaschinen im Vergleich : AltaVista.de, Fireball.de, Google.de und Lycos.de (2002) 0.01
    0.0068394816 = product of:
      0.04103689 = sum of:
        0.04103689 = weight(_text_:und in 1159) [ClassicSimilarity], result of:
          0.04103689 = score(doc=1159,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.42413816 = fieldWeight in 1159, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1159)
      0.16666667 = coord(1/6)
    
    Source
    Information und Mobilität: Optimierung und Vermeidung von Mobilität durch Information. Proceedings des 8. Internationalen Symposiums für Informationswissenschaft (ISI 2002), 7.-10.10.2002, Regensburg. Hrsg.: Rainer Hammwöhner, Christian Wolff, Christa Womser-Hacker
  5. Grasso, L.L.; Wahlig, H.: Google und seine Suchparameter : Eine Top 20-Precision Analyse anhand repräsentativ ausgewählter Anfragen (2005) 0.01
    0.006318042 = product of:
      0.037908252 = sum of:
        0.037908252 = weight(_text_:und in 3275) [ClassicSimilarity], result of:
          0.037908252 = score(doc=3275,freq=8.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.39180204 = fieldWeight in 3275, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=3275)
      0.16666667 = coord(1/6)
    
    Abstract
The article first summarizes and critically assesses leading precision analyses. Building on this, the methodology and results of the present study, which is restricted to Google, are presented. At the core of the study, the retrieval operators offered by Google are subjected to a quality measurement. The methodological instrument for this is a top-20 precision analysis of eight search queries from various predefined user types.
    Source
    Information - Wissenschaft und Praxis. 56(2005) H.2, S.77-86
  6. Eastman, C.M.: 30,000 hits may be better than 300 : precision anomalies in Internet searches (2002) 0.00
    0.0023517415 = product of:
      0.014110449 = sum of:
        0.014110449 = weight(_text_:in in 5231) [ClassicSimilarity], result of:
          0.014110449 = score(doc=5231,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2376267 = fieldWeight in 5231, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5231)
      0.16666667 = coord(1/6)
    
    Abstract
In this issue we begin with a paper in which Eastman points out that conventional narrowing of queries (the use of conjunctions and phrases) in a web search engine will reduce the number of hits returned but not necessarily increase precision among the top-ranked documents. Thus, by precision anomalies Eastman means that query-narrowing activity results in no change, or even a decrease, in precision. Multiple queries were run with multiple engines by students over a three-year period, and the formulation/engine combination was recorded, as was the number of hits. Relevance was also recorded for the top ten and top twenty ranked retrievals. While narrower searches reduced total hits, they did not usually improve precision. Initial high precision and poor query reformulation account for some of the results, as does AltaVista's failure to use, in its advanced search feature, the ranking algorithm incorporated in its regular search. However, since the top-listed returns often recurred across all formulations, it would seem that the ranking algorithms are doing a consistent job of practical precision ranking that is not improved by reformulation.
  7. Mettrop, W.; Nieuwenhuysen, P.: Internet search engines : fluctuations in document accessibility (2001) 0.00
    0.0019676082 = product of:
      0.011805649 = sum of:
        0.011805649 = weight(_text_:in in 4481) [ClassicSimilarity], result of:
          0.011805649 = score(doc=4481,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19881277 = fieldWeight in 4481, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4481)
      0.16666667 = coord(1/6)
    
    Abstract
An empirical investigation of the consistency of retrieval through Internet search engines is reported. Thirteen engines are evaluated: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler, and three national Dutch engines: Ilse, Search.nl and Vindex. The focus is on a characteristic related to size: the degree of consistency with which an engine retrieves documents. Does an engine always present the same relevant documents that are, or were, available in its databases? We observed and identified three types of fluctuations in the result sets of several kinds of searches, many of them significant. These should be taken into account by users who apply an Internet search engine, for instance to retrieve as many relevant documents as possible, to retrieve a document that was already found in a previous search, or to perform scientometric/bibliometric measurements. The fluctuations should also be considered as a complication in other research on the behaviour and performance of Internet search engines. In conclusion: in view of the increasing importance of the Internet as a publication/communication medium, the fluctuations in the result sets of Internet search engines can no longer be neglected.
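
     The paper identifies its own typology of fluctuations; as a rough, hypothetical illustration of the kind of consistency measurement involved, repeated submissions of one query can be compared by their pairwise result-set overlap (the Jaccard measure used here is an illustrative choice, not the paper's own):

        from itertools import combinations

        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        def consistency(runs):
            # Mean pairwise overlap across repeated result sets of one
            # query; 1.0 means identical documents were returned each time.
            pairs = list(combinations(runs, 2))
            return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

        # Three hypothetical runs of the same query against one engine:
        runs = [["u1", "u2", "u3"], ["u1", "u3", "u4"], ["u1", "u2", "u4"]]
        print(consistency(runs))  # 0.5: substantial fluctuation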
  8. Oppenheim, C.; Morris, A.; McKnight, C.: ¬The evaluation of WWW search engines (2000) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 4546) [ClassicSimilarity], result of:
          0.010709076 = score(doc=4546,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 4546, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4546)
      0.16666667 = coord(1/6)
    
    Abstract
The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way such studies have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, so the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively, and variations in the performance of any given search engine over time can be tracked. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations about the types of tools needed
  9. Landoni, M.; Bell, S.: Information retrieval techniques for evaluating search engines : a critical overview (2000) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 716) [ClassicSimilarity], result of:
          0.010709076 = score(doc=716,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 716, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=716)
      0.16666667 = coord(1/6)
    
    Abstract
The objective of this paper is to highlight the importance of a scientifically sound approach to search engine evaluation. There is now a flourishing literature describing various attempts at conducting such evaluations, following all sorts of approaches, but very often only the final results are published, with little, if any, information about the methodology and procedures adopted. These various experiments have been critically investigated and catalogued according to their scientific foundation by Bell [1], in an attempt to provide a valuable framework for future studies in this area. This paper reconsiders some of Bell's ideas in the light of the crisis of classic evaluation techniques for information retrieval and tries to envisage some form of collaboration between the IR and web communities in order to design a better and more consistent platform for the evaluation of tools for interactive information retrieval.
  10. Sarigil, E.; Sengor Altingovde, I.; Blanco, R.; Barla Cambazoglu, B.; Ozcan, R.; Ulusoy, Ö.: Characterizing, predicting, and handling web search queries that match very few or no results (2018) 0.00
    0.0016629322 = product of:
      0.009977593 = sum of:
        0.009977593 = weight(_text_:in in 4039) [ClassicSimilarity], result of:
          0.009977593 = score(doc=4039,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16802745 = fieldWeight in 4039, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4039)
      0.16666667 = coord(1/6)
    
    Abstract
A non-negligible fraction of user queries end up with very few or even no matching results in leading commercial web search engines. In this work, we provide a detailed characterization of such queries and show that search engines try to improve them by showing the results of related queries. Through a user study, we show that these query suggestions are usually perceived as relevant. Also, through a query-log analysis, we show that users are dissatisfied after submitting a query that matches no results at least 88.5% of the time. As a first step towards solving these no-answer queries, we devised a large number of features that can be used to identify such queries and built machine-learning models. These models can be useful in scenarios such as mobile or meta-search, where identifying a query that will retrieve no results at the client device (i.e., even before submitting it to the search engine) may yield gains in terms of bandwidth usage, power consumption, and/or monetary costs. Experiments over query logs indicate that, despite the heavy skew in class sizes, our models achieve good prediction quality, with accuracy (in terms of area under the curve) of up to 0.95.
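
     The paper's feature set and models are much richer than anything shown here; as a minimal sketch of the prediction setup, with entirely synthetic data standing in for real query-log features, one might train a classifier and evaluate it by area under the ROC curve (the measure reported in the abstract) like this:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins for query features (e.g. length or rarity
        # signals) and labels (1 = query would match no results).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))
        noise = rng.normal(scale=0.5, size=1000)
        y = (X[:, 0] + 0.5 * X[:, 1] + noise > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))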
  11. Serrano Cobos, J.; Quintero Orta, A.: Design, development and management of an information recovery system for an Internet Website : from documentary theory to practice (2003) 0.00
    0.0015457221 = product of:
      0.009274333 = sum of:
        0.009274333 = weight(_text_:in in 2726) [ClassicSimilarity], result of:
          0.009274333 = score(doc=2726,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 2726, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2726)
      0.16666667 = coord(1/6)
    
    Abstract
A real case study is shown, explaining along a timeline the whole process of design, development, and evaluation of a search engine used as a navigational help tool for end users and clients on a content website, e-commerce driven. The site is a community website, which determines the core design of the information service. The study involves several steps, such as information recovery system analysis, comparative analysis of other commercial search engines, service design, functionalities and scope, software selection, project design, project management, future service administration, and conclusions.
    Series
    Advances in knowledge organization; vol.8
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  12. Schaer, P.; Mayr, P.; Sünkler, S.; Lewandowski, D.: How relevant is the long tail? : a relevance assessment study on million short (2016) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 3144) [ClassicSimilarity], result of:
          0.008924231 = score(doc=3144,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 3144, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3144)
      0.16666667 = coord(1/6)
    
    Abstract
Users of web search engines are known to focus mostly on the top-ranked results of the search engine result page. While many studies support this well-known information-seeking pattern, only a few studies concentrate on the question of what users are missing by neglecting lower-ranked results. To learn more about the relevance distributions in the so-called long tail, we conducted a relevance assessment study with the Million Short long-tail web search engine. While we see a clear difference in content between the head and the tail of the search engine result list, we see no statistically significant differences in the binary relevance judgments and only weakly significant differences when using graded relevance. The tail contains different but still valuable results. We argue that the long tail can be a rich source for the diversification of web search engine result lists, but more evaluation is needed to clearly describe the differences.
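
     As a small, hypothetical illustration of the binary versus graded distinction the study draws (using DCG as one common graded measure, not necessarily the paper's own), two rankings can agree under binary judgments yet differ under graded ones:

        import math

        def binary_precision(grades):
            # Binary view: any grade above 0 counts as relevant.
            return sum(1 for g in grades if g > 0) / len(grades)

        def dcg(grades):
            # Graded view: higher grades and earlier ranks weigh more.
            return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

        head = [2, 1, 1, 0, 1]  # hypothetical grades (0-2), head results
        tail = [1, 1, 1, 1, 0]  # hypothetical grades, long-tail results

        print(binary_precision(head), binary_precision(tail))  # 0.8 0.8
        print(round(dcg(head), 2), round(dcg(tail), 2))        # 3.52 2.56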
    Footnote
To appear in: Experimental IR Meets Multilinguality, Multimodality, and Interaction. 7th International Conference of the CLEF Association, CLEF 2016, Évora, Portugal, September 5-8, 2016.
  13. Radev, D.R.; Libner, K.; Fan, W.: Getting answers to natural language questions on the Web (2002) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 5204) [ClassicSimilarity], result of:
          0.007728611 = score(doc=5204,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 5204, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.16666667 = coord(1/6)
    
    Abstract
Seven hundred natural language questions from TREC-8 and TREC-9 were sent by Radev, Libner, and Fan to each of nine web search engines. The top 40 sites returned by each system were stored and evaluated for how productive they were of correct answers. Each question, per engine, was scored as the sum of the reciprocal ranks of identified correct answers. The large number of zero scores gave a positive skew, violating the normality assumption for ANOVA, so values were transformed to zero for no hit and one for one or more hits. The non-zero values were then square-root transformed to remove the remaining positive skew. Interactions were observed between search engine and answer type (name, place, date, et cetera), search engine and the number of proper nouns in the query, search engine and the need for time limitation, and search engine and total query words. All effects were significant. The shortest queries had the highest mean scores. The presence of one or more proper nouns provides a significant advantage, as do non-time-dependent queries. Place, name, person, and text description had mean scores between .85 and .9, with date at .81 and number at .59. There were significant differences in score by search engine. Search engines found at least one correct answer in between 75.45% and 87.7% of the cases. Google and Northern Light were just short of a 90% hit rate. No evidence indicated that a particular engine was better at answering any particular sort of question.
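
     The scoring rule described above translates directly into code; a minimal sketch follows (the ranks shown are hypothetical examples):

        import math

        def question_score(correct_ranks, cutoff=40):
            # Sum of reciprocal ranks of correct answers in the top 40.
            return sum(1.0 / r for r in correct_ranks if r <= cutoff)

        # Hypothetical per-question outcomes for one engine: correct
        # answers at ranks 1 and 5; no correct answer; one at rank 3.
        scores = [question_score([1, 5]), question_score([]),
                  question_score([3])]

        # One reading of the transformation described in the abstract:
        # zero stays zero; non-zero scores are square-root transformed
        # to reduce the remaining positive skew.
        transformed = [math.sqrt(s) if s > 0 else 0.0 for s in scores]
        print(scores)       # [1.2, 0.0, 0.333...]
        print(transformed)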
  14. Vegt, A. van der; Zuccon, G.; Koopman, B.: Do better search engines really equate to better clinical decisions? : If not, why not? (2021) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 150) [ClassicSimilarity], result of:
          0.007728611 = score(doc=150,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 150, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=150)
      0.16666667 = coord(1/6)
    
    Abstract
Previous research has found that improved search engine effectiveness, evaluated using a batch-style approach, does not always translate into significant improvements in user task performance; however, these prior studies focused on simple recall- and precision-based search tasks. We investigated the same relationship, but for the realistic, complex search tasks required in clinical decision making. One hundred and nine clinicians and final-year medical students answered 16 clinical questions. Although the search engine did improve answer accuracy by 20 percentage points, there was no significant difference when participants used a more effective, state-of-the-art search engine. We also found that the search engine effectiveness difference identified in the lab was diminished by around 70% when the search engines were used with real users. Despite the aid of the search engine, half of the clinical questions were answered incorrectly. We further identified the relative contribution of search engine effectiveness to overall end-task success, and found that the ability to interpret documents correctly was a much more important factor in task success. If these findings are representative, information retrieval research may need to reorient its emphasis towards helping users better understand information, rather than just finding it for them.