Search (2 results, page 1 of 1)

  • × author_ss:"Yaari, E."
  • × author_ss:"Bar-Ilan, J."
  1. Bar-Ilan, J.; Keenoy, K.; Yaari, E.; Levene, M.: User rankings of search engine results (2007) 0.00
    0.0033821356 = product of:
      0.0067642713 = sum of:
        0.0067642713 = product of:
          0.020292813 = sum of:
            0.020292813 = weight(_text_:12 in 470) [ClassicSimilarity], result of:
              0.020292813 = score(doc=470,freq=2.0), product of:
                0.13281173 = queryWeight, product of:
                  2.765864 = idf(docFreq=7562, maxDocs=44218)
                  0.048018172 = queryNorm
                0.15279384 = fieldWeight in 470, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.765864 = idf(docFreq=7562, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=470)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
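    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown for this hit; hit 2 below carries an identical tree. As a minimal sketch, the arithmetic can be recomputed in Python directly from the quantities shown (all inputs below are read off the tree; only the formula, Lucene's classic TF-IDF, is assumed):

      import math

      # Inputs read directly from the explain tree for doc 470
      freq = 2.0                 # termFreq of "12" in the field
      doc_freq = 7562            # docFreq from the idf line
      max_docs = 44218           # maxDocs from the idf line
      query_norm = 0.048018172   # queryNorm
      field_norm = 0.0390625     # fieldNorm(doc=470)

      tf = math.sqrt(freq)                           # 1.4142135
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 2.765864
      query_weight = idf * query_norm                # 0.13281173
      field_weight = tf * idf * field_norm           # 0.15279384
      weight = query_weight * field_weight           # 0.020292813

      # coord factors from the tree: 1 of 3 clauses matched, then 1 of 2
      score = weight * (1 / 3) * (1 / 2)
      print(f"{score:.10f}")                         # ~0.0033821356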
    
    Abstract
    In this study, we investigate the similarities and differences between rankings of search results by users and search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. We computed the similarity of the rankings of the users and search engines using four nonparametric correlation measures in [0,1] that complement each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no "average user," and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors. This is the first large-scale experiment in which users were asked to rank the results of identical queries. The analysis of the experimental results demonstrates the potential for personalized search.
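    The abstract does not name the four correlation measures, so the following Python sketch is illustrative only: it uses a normalized Spearman footrule as one plausible rank-similarity measure in [0,1] (1 = identical rankings, 0 = maximally distant) between a user's ranking and an engine's ranking of the same URLs:

      def footrule_similarity(ranking_a: list[str], ranking_b: list[str]) -> float:
          """Similarity in [0,1] between two rankings of the same items."""
          assert set(ranking_a) == set(ranking_b)
          pos_b = {url: i for i, url in enumerate(ranking_b)}
          distance = sum(abs(i - pos_b[url]) for i, url in enumerate(ranking_a))
          n = len(ranking_a)
          return 1 - distance / (n * n // 2)  # n*n//2 is the maximum footrule distance

      user = ["u3", "u1", "u5", "u2", "u4"]     # hypothetical user ranking
      engine = ["u1", "u2", "u3", "u4", "u5"]   # hypothetical engine ranking
      print(footrule_similarity(user, engine))  # ~0.33, i.e. low similarity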
  2. Bar-Ilan, J.; Keenoy, K.; Levene, M.; Yaari, E.: Presentation bias is significant in determining user preference for search results : a user study (2009) 0.00
    0.0033821356 = product of:
      0.0067642713 = sum of:
        0.0067642713 = product of:
          0.020292813 = sum of:
            0.020292813 = weight(_text_:12 in 2703) [ClassicSimilarity], result of:
              0.020292813 = score(doc=2703,freq=2.0), product of:
                0.13281173 = queryWeight, product of:
                  2.765864 = idf(docFreq=7562, maxDocs=44218)
                  0.048018172 = queryNorm
                0.15279384 = fieldWeight in 2703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.765864 = idf(docFreq=7562, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2703)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    We describe the results of an experiment designed to study user preferences for different orderings of search results from three major search engines. In the experiment, 65 users were asked to choose the best ordering from two different orderings of the same set of search results: Each pair consisted of the search engine's original top-10 ordering and a synthetic ordering created from the same top-10 results retrieved by the search engine. This process was repeated for 12 queries and nine different synthetic orderings. The results show that there is a slight overall preference for the search engines' original orderings, but the preference is rarely significant. Users' choice of the best result from each of the different orderings indicates that placement on the page (i.e., whether the result appears near the top) is the most important factor used in determining the quality of the result, not the actual content displayed in the top-10 snippets. In addition to the placement bias, we detected a small bias due to the reputation of the sites appearing in the search results.
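    The nine synthetic orderings are not spelled out in the abstract, so this closing sketch only illustrates the general setup under assumed orderings: deriving synthetic permutations (here a reversal and a seeded shuffle, both hypothetical choices) from an engine's original top-10, yielding the (original, synthetic) pairs shown to users:

      import random

      def reversed_order(top10: list[str]) -> list[str]:
          # Assumed synthetic ordering 1: the original list reversed
          return list(reversed(top10))

      def shuffled_order(top10: list[str], seed: int) -> list[str]:
          # Assumed synthetic ordering 2: a reproducible random permutation
          rng = random.Random(seed)
          permuted = top10[:]
          rng.shuffle(permuted)
          return permuted

      original = [f"result_{i}" for i in range(1, 11)]  # placeholder top-10
      pairs = [(original, reversed_order(original)),
               (original, shuffled_order(original, seed=12))]

    Each pair would then be presented without revealing which ordering is the engine's original, as the experiment describes.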