Search (11 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • theme_ss:"Suchmaschinen"
  • type_ss:"a"
  1. Mettrop, W.; Nieuwenhuysen, P.: Internet search engines : fluctuations in document accessibility (2001) 0.02
    0.018386986 = product of:
      0.06435445 = sum of:
        0.019307088 = weight(_text_:retrieval in 4481) [ClassicSimilarity], result of:
          0.019307088 = score(doc=4481,freq=2.0), product of:
            0.11553899 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03819578 = queryNorm
            0.16710453 = fieldWeight in 4481, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4481)
        0.045047358 = weight(_text_:internet in 4481) [ClassicSimilarity], result of:
          0.045047358 = score(doc=4481,freq=12.0), product of:
            0.11276311 = queryWeight, product of:
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.03819578 = queryNorm
            0.39948666 = fieldWeight in 4481, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.9522398 = idf(docFreq=6276, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4481)
      0.2857143 = coord(2/7)
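    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal Python sketch that reproduces the listed score from the factors shown (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, times a coord factor for 2 of 7 matching query terms):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # One term's contribution under ClassicSimilarity:
          # score = queryWeight * fieldWeight, with tf = sqrt(freq).
          tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
          query_weight = idf * query_norm       # 3.024915 * 0.03819578 = 0.11553899
          field_weight = tf * idf * field_norm  # 0.16710453 for doc 4481
          return query_weight * field_weight

      # Factor values copied from the explain tree for doc 4481 above.
      retrieval = term_score(2.0, 3.024915, 0.03819578, 0.0390625)
      internet = term_score(12.0, 2.9522398, 0.03819578, 0.0390625)
      print((retrieval + internet) * (2 / 7))  # ~0.018386986, the listed score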
    
    Abstract
    An empirical investigation of the consistency of retrieval through Internet search engines is reported. Thirteen engines are evaluated: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler and three national Dutch engines: Ilse, Search.nl and Vindex. The focus is on a characteristic related to size: the degree of consistency with which an engine retrieves documents. Does an engine always present the same relevant documents that are, or were, available in its databases? We observed and identified three types of fluctuations in the result sets of several kinds of searches, many of them significant. These should be taken into account by users who apply an Internet search engine, for instance to retrieve as many relevant documents as possible, to retrieve a document that was already found in a previous search, or to perform scientometric/bibliometric measurements. The fluctuations should also be considered a complication in other research on the behaviour and performance of Internet search engines. In conclusion: in view of the increasing importance of the Internet as a publication/communication medium, the fluctuations in the result sets of Internet search engines can no longer be neglected.
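    The authors' measurement scripts are not part of this record; purely as an illustration, one simple way to quantify the kind of fluctuation they describe is to re-run the same query over time and compare the returned URL sets. The function names and the Jaccard overlap measure below are illustrative choices, not the authors' method:

      def jaccard(a, b):
          # Overlap between two result sets of URLs (1.0 = identical).
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 1.0

      def mean_overlap(runs):
          # Mean pairwise overlap across repeated runs of one query;
          # values well below 1.0 indicate fluctuating result sets.
          pairs = [(i, j) for i in range(len(runs)) for j in range(i + 1, len(runs))]
          return sum(jaccard(runs[i], runs[j]) for i, j in pairs) / len(pairs)

      runs = [["u1", "u2", "u3"], ["u1", "u3"], ["u1", "u2", "u4"]]
      print(mean_overlap(runs))  # ~0.47 for these toy runs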
  2. Agata, T.: A measure for evaluating search engines on the World Wide Web : retrieval test with ESL (Expected Search Length) (1997) 0.01
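    No abstract accompanies this record, but ESL is Cooper's classic measure: the number of non-relevant documents a user must examine before finding a desired number of relevant ones. A minimal sketch for a strictly ranked list follows; the original measure's expectation over tied ranks is omitted, and how Agata applied ESL is not shown in this record:

      def expected_search_length(ranked_relevance, wanted):
          # Non-relevant documents seen before the `wanted`-th relevant
          # one in a ranked list of boolean relevance flags.
          found = skipped = 0
          for is_relevant in ranked_relevance:
              if is_relevant:
                  found += 1
                  if found == wanted:
                      return skipped
              else:
                  skipped += 1
          return None  # fewer than `wanted` relevant documents retrieved

      print(expected_search_length([0, 1, 0, 0, 1, 1], wanted=2))  # -> 3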
  3. Landoni, M.; Bell, S.: Information retrieval techniques for evaluating search engines : a critical overview (2000) 0.01
    Abstract
    The objective of this paper is to highlight the importance of a scientifically sound approach to search engine evaluation. There is a flourishing literature describing various attempts at conducting such evaluations, following all sorts of approaches, but very often only the final results are published, with little, if any, information about the methodology and procedures adopted. These experiments have been critically investigated and catalogued according to their scientific foundation by Bell [1], in an attempt to provide a valuable framework for future studies in this area. This paper reconsiders some of Bell's ideas in the light of the crisis of classic evaluation techniques for information retrieval and tries to envisage some form of collaboration between the IR and web communities in order to design a better and more consistent platform for the evaluation of tools for interactive information retrieval.
  4. Oppenheim, C.; Morris, A.; McKnight, C.: The evaluation of WWW search engines (2000) 0.00
    Abstract
    The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way they have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, so the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively and variations in the performance of any given search engine over time can be tracked. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations about the types of tools needed.
  5. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.00
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the three sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
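    The abstract does not spell out the method, so the following is only a sketch of the standard pooling approach to relative recall, which is consistent with what is described: the union of relevant documents found by all engines stands in for the unknowable full relevant set. Engine names and documents below are toy data:

      def relative_recall(relevant_by_engine):
          # Each engine's relevant hits divided by the pooled set of
          # relevant documents that any engine retrieved.
          pool = set().union(*relevant_by_engine.values())
          return {name: len(hits) / len(pool)
                  for name, hits in relevant_by_engine.items()}

      judged = {
          "AltaVista": {"d1", "d2", "d5"},
          "Excite": {"d1", "d3"},
          "Lycos": {"d2", "d3", "d4"},
      }
      print(relative_recall(judged))
      # {'AltaVista': 0.6, 'Excite': 0.4, 'Lycos': 0.6} with a pool of 5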
  6. Serrano Cobos, J.; Quintero Orta, A.: Design, development and management of an information recovery system for an Internet Website : from documentary theory to practice (2003) 0.00
  7. Vegt, A. van der; Zuccon, G.; Koopman, B.: Do better search engines really equate to better clinical decisions? : If not, why not? (2021) 0.00
    Abstract
    Previous research has found that improved search engine effectiveness, evaluated using a batch-style approach, does not always translate to significant improvements in user task performance; however, these prior studies focused on simple recall- and precision-based search tasks. We investigated the same relationship, but for the realistic, complex search tasks required in clinical decision making. One hundred and nine clinicians and final-year medical students answered 16 clinical questions. Although the search engine did improve answer accuracy by 20 percentage points, there was no significant difference when participants used a more effective, state-of-the-art search engine. We also found that the search engine effectiveness difference identified in the lab was diminished by around 70% when the search engines were used with real users. Despite the aid of the search engine, half of the clinical questions were answered incorrectly. We further identified the relative contribution of search engine effectiveness to overall end-task success, and found that the ability to interpret documents correctly was a much more important factor in task success. If these findings are representative, information retrieval research may need to reorient its emphasis towards helping users to better understand information, rather than just finding it for them.
  8. Eastman, C.M.: 30,000 hits may be better than 300 : precision anomalies in Internet searches (2002) 0.00
  9. Bar-Ilan, J.: Methods for measuring search engine performance over time (2002) 0.00
    Date
    23. 3.2002 9:50:29
  10. Dresel, R.; Hörnig, D.; Kaluza, H.; Peter, A.; Roßmann, A.; Sieber, W.: Evaluation deutscher Web-Suchwerkzeuge : Ein vergleichender Retrievaltest (2001) 0.00
    Abstract
    The German search engines Abacho, Acoon, Fireball, and Lycos, as well as the web directories Web.de and Yahoo!, are subjected to a quality test measuring relative recall, precision, and availability. The retrieval test methods are presented. On average, at a cut-off value of 25, the tools achieve a recall of around 22%, a precision of just under 19%, and an availability of 24%.
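    A hedged sketch of how the three reported figures can be computed at a cut-off of 25, assuming each ranked result carries a relevance judgement and an availability flag (whether the link still resolved); the data layout is illustrative, not taken from the study:

      def metrics_at_cutoff(results, pooled_relevant, cutoff=25):
          # results: ranked list of (doc_id, is_relevant, is_available);
          # pooled_relevant: relevant docs found by any tested tool.
          top = results[:cutoff]
          hits = [doc for doc, rel, _ in top if rel]
          return {
              "precision": len(hits) / len(top),
              "relative_recall": len(hits) / len(pooled_relevant),
              "availability": sum(avail for _, _, avail in top) / len(top),
          }

      sample = [("d%d" % i, i % 5 == 0, i % 4 != 0) for i in range(25)]
      print(metrics_at_cutoff(sample, {"d0", "d5", "d10", "d15", "d20", "d99"}))
      # precision 0.2, relative recall ~0.83, availability 0.72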
  11. Günther, M.: Vermitteln Suchmaschinen vollständige Bilder aktueller Themen? : Untersuchung der Gewichtung inhaltlicher Aspekte von Suchmaschinenergebnissen in Deutschland und den USA (2016) 0.00
    Source
    Young information scientists. 1(2016), S.13-29