Search (7 results, page 1 of 1)

  • author_ss:"Levene, M."
  1. Bar-Ilan, J.; Keenoy, K.; Yaari, E.; Levene, M.: User rankings of search engine results (2007) 0.12
    0.118524216 = product of:
      0.17778632 = sum of:
        0.10609328 = weight(_text_:search in 470) [ClassicSimilarity], result of:
          0.10609328 = score(doc=470,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.60717577 = fieldWeight in 470, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=470)
        0.07169304 = product of:
          0.14338608 = sum of:
            0.14338608 = weight(_text_:engines in 470) [ClassicSimilarity], result of:
              0.14338608 = score(doc=470,freq=8.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5613568 = fieldWeight in 470, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=470)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
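    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the per-term contributions are combined with the coord factors shown. A minimal Python sketch that reproduces the numbers for this record, using only the values printed above (an illustration, not Lucene's own code):

        import math

        # Constants copied from the explanation of doc 470 above.
        QUERY_NORM = 0.05027291
        FIELD_NORM = 0.0390625

        def term_score(freq, idf):
            # ClassicSimilarity: weight = queryWeight * fieldWeight
            query_weight = idf * QUERY_NORM                     # idf * queryNorm
            field_weight = math.sqrt(freq) * idf * FIELD_NORM   # tf * idf * fieldNorm
            return query_weight * field_weight

        w_search = term_score(freq=20.0, idf=3.475677)          # ~0.10609
        w_engines = term_score(freq=8.0, idf=5.080822) * 0.5    # coord(1/2) -> ~0.07169
        score = (w_search + w_engines) * (2.0 / 3.0)            # coord(2/3)
        print(round(score, 6))                                  # ~0.118524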
    
    Abstract
    In this study, we investigate the similarities and differences between rankings of search results by users and search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. We computed the similarity of the rankings of the users and search engines using four nonparametric correlation measures in [0,1] that complement each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no "average user," and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors. This is the first large-scale experiment in which users were asked to rank the results of identical queries. The analysis of the experimental results demonstrates the potential for personalized search.
    Object
    MSN Search
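    The abstract mentions four nonparametric correlation measures in [0,1] but does not name them. As a purely illustrative stand-in (an assumption on my part, not necessarily one of the paper's measures), a normalized Spearman footrule similarity between a user's top-10 ranking and an engine's top-10 ranking of the same URLs can be computed as follows:

        def footrule_similarity(rank_a, rank_b):
            # rank_a, rank_b: dicts mapping the same items to rank positions 1..n.
            # Returns a similarity in [0, 1]: 1.0 identical, 0.0 maximally distant.
            n = len(rank_a)
            distance = sum(abs(rank_a[i] - rank_b[i]) for i in rank_a)
            max_distance = n * n // 2        # maximum footrule distance for n items
            return 1.0 - distance / max_distance

        user = {"url%d" % i: i for i in range(1, 11)}           # user ranks 1..10
        engine = {"url%d" % i: 11 - i for i in range(1, 11)}    # engine ranks reversed
        print(footrule_similarity(user, engine))                # 0.0 (total disagreement)
        print(footrule_similarity(user, user))                  # 1.0 (identical rankings)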
  2. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Testing the stability of "wisdom of crowds" judgments of search results over time and their similarity with the search engine rankings (2016) 0.11
    0.110079154 = product of:
      0.16511872 = sum of:
        0.08051914 = weight(_text_:search in 3071) [ClassicSimilarity], result of:
          0.08051914 = score(doc=3071,freq=18.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.460814 = fieldWeight in 3071, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=3071)
        0.08459958 = sum of:
          0.057354435 = weight(_text_:engines in 3071) [ClassicSimilarity], result of:
            0.057354435 = score(doc=3071,freq=2.0), product of:
              0.25542772 = queryWeight, product of:
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.05027291 = queryNorm
              0.22454272 = fieldWeight in 3071, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.03125 = fieldNorm(doc=3071)
          0.027245143 = weight(_text_:22 in 3071) [ClassicSimilarity], result of:
            0.027245143 = score(doc=3071,freq=2.0), product of:
              0.17604718 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05027291 = queryNorm
              0.15476047 = fieldWeight in 3071, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3071)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - One of the under-explored aspects of user information-seeking behaviour is the influence of time on relevance evaluation. Previous studies have shown that individual users may change their assessment of search results over time. It is also known that aggregated judgements of multiple individual users can lead to correct and reliable decisions; this phenomenon is known as the "wisdom of crowds". The purpose of this paper is to examine whether aggregated judgements are more stable, and thus more reliable, over time than individual user judgements. Design/methodology/approach - Two simple measures are proposed to calculate the aggregated judgements of search results and to compare their reliability and stability to those of individual user judgements. In addition, the aggregated "wisdom of crowds" judgements were used to compare human assessments of search results with the search engine's rankings. A large-scale user study was conducted with 87 participants who evaluated two different queries and four diverse result sets twice, with an interval of two months. Two types of judgements were considered in this study: relevance on a four-point scale, and ranking on a ten-point scale without ties. Findings - Aggregated judgements were found to be much more stable than individual user judgements, yet quite different from the search engine rankings. Practical implications - The proposed "wisdom of crowds"-based approach provides a reliable reference point for the evaluation of search engines. This is also important for exploring the need for personalisation and for adapting a search engine's ranking over time to changes in users' preferences. Originality/value - This is the first study to apply the notion of the "wisdom of crowds" to the under-explored phenomenon of change over time in user evaluation of relevance.
    Date
    20. 1.2015 18:30:22
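    The two aggregation measures are not spelled out in the abstract. One plausible reading, given here only as an assumption for illustration, is to aggregate relevance by the mean grade per result and to aggregate rankings by mean rank (a Borda-style ordering); stability can then be checked by comparing the aggregates from the two evaluation rounds:

        from statistics import mean

        def aggregate_relevance(judgements):
            # judgements: list of per-user dicts {result_id: grade on the 4-point scale}
            return {r: mean(j[r] for j in judgements) for r in judgements[0]}

        def aggregate_ranking(rankings):
            # rankings: list of per-user dicts {result_id: rank 1..10, no ties}
            mean_rank = {r: mean(rk[r] for rk in rankings) for r in rankings[0]}
            return sorted(mean_rank, key=mean_rank.get)   # Borda-style aggregate order

        # Stability over time: compute the aggregates for round 1 and round 2 separately
        # and compare them (e.g. with a rank-correlation measure); the paper reports that
        # such aggregates change far less than individual users' judgements do.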
  3. Bar-Ilan, J.; Levene, M.; Mat-Hassan, M.: Methods for evaluating dynamic changes in search engine rankings : a case study (2006) 0.11
    0.10716495 = product of:
      0.16074742 = sum of:
        0.08487463 = weight(_text_:search in 616) [ClassicSimilarity], result of:
          0.08487463 = score(doc=616,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.48574063 = fieldWeight in 616, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=616)
        0.075872794 = product of:
          0.15174559 = sum of:
            0.15174559 = weight(_text_:engines in 616) [ClassicSimilarity], result of:
              0.15174559 = score(doc=616,freq=14.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.59408426 = fieldWeight in 616, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=616)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines. Design/methodology/approach - The paper compares the rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three-month interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine. Findings - The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top-ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging. Originality/value - The paper shows that, because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
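    The three measures computed at each data collection point are not defined in the abstract. A minimal sketch of the kind of comparison involved (my own illustration, assuming top-10 URL lists collected at two points in time) tracks the overlap of the two lists and how far each shared URL moved:

        def compare_top10(list_t1, list_t2):
            # list_t1, list_t2: top-10 URLs in rank order (index 0 = rank 1)
            common = set(list_t1) & set(list_t2)
            overlap = len(common) / 10.0
            shifts = {u: abs(list_t1.index(u) - list_t2.index(u)) for u in common}
            return overlap, shifts

        t1 = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
        t2 = ["b", "a", "c", "k", "e", "f", "g", "h", "i", "j"]
        overlap, shifts = compare_top10(t1, t2)
        print(overlap)    # 0.9 -> one URL dropped out of the top ten
        print(shifts)     # rank shift per shared URL, e.g. "a" and "b" each moved by 1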
  4. Bar-Ilan, J.; Keenoy, K.; Levene, M.; Yaari, E.: Presentation bias is significant in determining user preference for search results : a user study (2009) 0.10
    0.09705824 = product of:
      0.14558735 = sum of:
        0.09489272 = weight(_text_:search in 2703) [ClassicSimilarity], result of:
          0.09489272 = score(doc=2703,freq=16.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.54307455 = fieldWeight in 2703, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2703)
        0.05069464 = product of:
          0.10138928 = sum of:
            0.10138928 = weight(_text_:engines in 2703) [ClassicSimilarity], result of:
              0.10138928 = score(doc=2703,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.39693922 = fieldWeight in 2703, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2703)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We describe the results of an experiment designed to study user preferences for different orderings of search results from three major search engines. In the experiment, 65 users were asked to choose the best ordering from two different orderings of the same set of search results: Each pair consisted of the search engine's original top-10 ordering and a synthetic ordering created from the same top-10 results retrieved by the search engine. This process was repeated for 12 queries and nine different synthetic orderings. The results show that there is a slight overall preference for the search engines' original orderings, but the preference is rarely significant. Users' choice of the best result from each of the different orderings indicates that placement on the page (i.e., whether the result appears near the top) is the most important factor used in determining the quality of the result, not the actual content displayed in the top-10 snippets. In addition to the placement bias, we detected a small bias due to the reputation of the sites appearing in the search results.
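    The experiment pairs each engine's original top-10 ordering with a synthetic reordering of the same results. A small sketch of how one such pair could be produced (the reversed and shuffled orderings here are my own examples; the study used nine different synthetic orderings):

        import random

        def make_pair(original_top10, synthetic="reversed", seed=0):
            # Returns (original, synthetic) orderings of the same top-10 results.
            if synthetic == "reversed":
                reordered = list(reversed(original_top10))
            else:                                    # a seeded random shuffle
                reordered = original_top10[:]
                random.Random(seed).shuffle(reordered)
            return original_top10, reordered

        original, synthetic = make_pair(["result%d" % i for i in range(1, 11)])
        # In the study, users saw both orderings without labels and chose the better one;
        # tallying those choices per query yields the preference counts analysed above.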
  5. Mat-Hassan, M.; Levene, M.: Associating search and navigation behavior through log analysis (2005) 0.03
    0.029588005 = product of:
      0.08876401 = sum of:
        0.08876401 = weight(_text_:search in 3681) [ClassicSimilarity], result of:
          0.08876401 = score(doc=3681,freq=14.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5079997 = fieldWeight in 3681, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3681)
      0.33333334 = coord(1/3)
    
    Abstract
    We report on a study that was undertaken to better understand search and navigation behavior by exploiting the close association between the process underlying users' query submission and the navigational trails emanating from query clickthroughs. To our knowledge, there has been little research towards bridging the gap between these two important processes pertaining to users' online information-searching activity. Based on log data obtained from a search and navigation documentation system called AutoDoc, we propose a model of user search sessions and provide an analysis of users' link (clickthrough) selection behavior, reformulation activities, and search strategy patterns. We also conducted a simple user study to gauge users' perceptions of their information-seeking activity when interacting with the system. The results obtained show that analyzing both the query submissions and the navigation starting from query clickthroughs reveals much more interesting patterns than analyzing these two processes independently. On average, AutoDoc users submitted only one query per search session and entered approximately two query terms. Specifically, our results show that AutoDoc users are more inclined to submit new queries or resubmit modified queries than to navigate by link-following. We also show that users' behavior within this search system can be approximated by a Zipf's Law distribution.
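    The abstract notes that user behavior within AutoDoc can be approximated by a Zipf's Law distribution. A generic way to check such a claim on query-log counts (a sketch of the standard technique, not the paper's own analysis) is to fit a line to the log-log rank-frequency data; a slope near -1 is the classic Zipfian signature:

        import math
        from collections import Counter

        def zipf_slope(queries):
            # Least-squares slope of log(frequency) vs. log(rank) for a query log.
            freqs = sorted(Counter(queries).values(), reverse=True)
            xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
            ys = [math.log(f) for f in freqs]
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            var = sum((x - mx) ** 2 for x in xs)
            return cov / var    # a value near -1.0 suggests a Zipf-like distribution

        # Synthetic log with frequencies proportional to 1/rank (60, 30, 20, 15).
        print(zipf_slope(["q1"] * 60 + ["q2"] * 30 + ["q3"] * 20 + ["q4"] * 15))  # ~ -1.0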
  6. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017) 0.02
    0.022366427 = product of:
      0.06709928 = sum of:
        0.06709928 = weight(_text_:search in 3593) [ClassicSimilarity], result of:
          0.06709928 = score(doc=3593,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3840117 = fieldWeight in 3593, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3593)
      0.33333334 = coord(1/3)
    
    Abstract
    We present the first systematic study of the influence of time on user judgements of the rankings and relevance grades of web search engine results. The goal of this study is to evaluate the change in user assessment of search results and to explore how users' judgements change. To this end, we conducted a large-scale user study with 86 participants who evaluated 2 different queries and 4 diverse result sets twice, with an interval of 2 months. To analyze the results, we investigate whether 2 types of patterns of user behavior from the theory of categorical thinking hold for the evaluation of search results: (a) coarseness and (b) locality. To quantify these patterns, we devised 2 new measures of change in user judgements and distinguish between local changes (when users swap between close ranks or relevance values) and nonlocal changes. Two types of judgements were considered in this study: (a) relevance on a 4-point scale, and (b) ranking on a 10-point scale without ties. We found that users tend to change their judgements of the results over time in about 50% of cases for relevance and in 85% of cases for ranking. However, the majority of these changes were local.
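    The two change measures and the local/nonlocal distinction are only sketched in the abstract. Under the reading that a change is "local" when the old and new values differ by at most one step (the threshold of 1 is my assumption, not the paper's definition), the reported fractions could be computed along these lines:

        def change_stats(round1, round2, local_threshold=1):
            # round1, round2: dicts {result_id: judgement} from the two evaluation rounds;
            # works for 4-point relevance grades and 10-point ranks alike.
            changed = local = 0
            for r in round1:
                diff = abs(round1[r] - round2[r])
                if diff > 0:
                    changed += 1
                    if diff <= local_threshold:
                        local += 1
            changed_share = changed / len(round1)
            local_share = local / changed if changed else 0.0
            return changed_share, local_share

        # Relevance grades for five results, judged twice with a two-month gap.
        print(change_stats({1: 4, 2: 3, 3: 2, 4: 1, 5: 4},
                           {1: 3, 2: 3, 3: 4, 4: 1, 5: 4}))   # (0.4, 0.5)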
  7. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Categorical relevance judgment (2018) 0.02
    0.022366427 = product of:
      0.06709928 = sum of:
        0.06709928 = weight(_text_:search in 4457) [ClassicSimilarity], result of:
          0.06709928 = score(doc=4457,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3840117 = fieldWeight in 4457, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4457)
      0.33333334 = coord(1/3)
    
    Abstract
    In this study we aim to explore users' behavior when assessing the relevance of search results, based on the hypothesis of categorical thinking. To investigate how users categorize search engine results, we perform several experiments in which users are asked to group a list of 20 search results into several categories, while attaching a relevance judgment to each formed category. Moreover, to determine how users change their minds over time, each experiment was repeated three times under the same conditions, with a gap of one month between rounds. The results show that on average users form 4-5 categories. Within each round, the size of a category decreases with the relevance of the category. To measure the agreement between the search engine's ranking and the users' relevance judgments, we defined two novel similarity measures, the average concordance and the MinMax swap ratio. Similarity is shown to be highest for the third round, as the users' opinions stabilize. Qualitative analysis uncovered some interesting points: users tended to categorize results by the type and reliability of their source; in particular, they found commercial sites less trustworthy and attached high relevance to Wikipedia when their prior domain knowledge was limited.
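    The average concordance and the MinMax swap ratio are defined in the paper itself, not in the abstract, so the sketch below uses a generic pairwise concordance count as a stand-in (my own simplification, not the paper's definitions): the share of result pairs from different categories that the engine's ranking and the user's category grades order the same way.

        from itertools import combinations

        def pairwise_concordance(engine_rank, user_grade):
            # engine_rank: {result_id: rank, 1 = best};  user_grade: {result_id: relevance}.
            # Returns the share of comparable pairs that both orderings rank consistently.
            concordant = comparable = 0
            for a, b in combinations(engine_rank, 2):
                if user_grade[a] == user_grade[b]:
                    continue                          # same category: pair not comparable
                comparable += 1
                engine_prefers_a = engine_rank[a] < engine_rank[b]
                user_prefers_a = user_grade[a] > user_grade[b]
                if engine_prefers_a == user_prefers_a:
                    concordant += 1
            return concordant / comparable if comparable else 1.0

        rank = {"r1": 1, "r2": 2, "r3": 3, "r4": 4}
        grades = {"r1": 3, "r2": 3, "r3": 2, "r4": 1}   # results grouped into 3 categories
        print(pairwise_concordance(rank, grades))        # 1.0 -> full agreement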