Search (3 results, page 1 of 1)

  • author_ss:"Levene, M."
  • theme_ss:"Benutzerstudien"
  1. Mat-Hassan, M.; Levene, M.: Associating search and navigation behavior through log analysis (2005)
    
    Abstract
We report on a study that was undertaken to better understand search and navigation behavior by exploiting the close association between the process underlying users' query submission and the navigational trails emanating from query clickthroughs. To our knowledge, there has been little research towards bridging the gap between these two important processes pertaining to users' online information searching activity. Based on log data obtained from a search and navigation documentation system called AutoDoc, we propose a model of user search sessions and provide an analysis of users' link or clickthrough selection behavior, reformulation activities, and search strategy patterns. We also conducted a simple user study to gauge users' perceptions of their information seeking activity when interacting with the system. The results obtained show that analyzing both the query submissions and the navigation starting from query clickthroughs reveals much more interesting patterns than analyzing these two processes independently. On average, AutoDoc users submitted only one query per search session and entered approximately two query terms. Specifically, our results show that AutoDoc users are more inclined to submit new queries or resubmit modified queries than to navigate by link following. We also show that users' behavior within this search system can be approximated by a Zipf's Law distribution.
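    The Zipf's Law approximation mentioned in the abstract can be illustrated with a short sketch: under Zipf's Law, the frequency of the r-th most common event is roughly proportional to 1/r^s, so a rank-frequency plot is near-linear on log-log axes. The frequency counts below are hypothetical stand-ins, not data from the AutoDoc logs.

    ```python
    import math

    # Hypothetical rank-ordered frequency counts (e.g., of query terms).
    # Zipf's Law predicts f(r) ~ C / r^s for some exponent s near 1.
    freqs = [1000, 480, 320, 250, 195, 170, 140, 125, 110, 100]

    # Fit s by least squares on the log-log form: log f(r) = log C - s * log r.
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    s = -slope  # estimated Zipf exponent; close to 1 for these counts
    print(round(s, 2))
    ```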
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.9, S.913-934
  2. Bar-Ilan, J.; Keenoy, K.; Yaari, E.; Levene, M.: User rankings of search engine results (2007)
    
    Abstract
    In this study, we investigate the similarities and differences between rankings of search results by users and search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. We computed the similarity of the rankings of the users and search engines using four nonparametric correlation measures in [0,1] that complement each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. We examined the effects of the presentation order of the results, and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no "average user," and even if the users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors. This is the first large-scale experiment in which users were asked to rank the results of identical queries. The analysis of the experimental results demonstrates the potential for personalized search.
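    The abstract does not name the four correlation measures used in the study; as an illustration only, one common nonparametric choice is Kendall's tau rescaled from [-1, 1] to [0, 1], so that 1 means identical orderings and 0 means fully reversed. The document IDs below are hypothetical.

    ```python
    from itertools import combinations

    def kendall_tau_unit(rank_a, rank_b):
        """Kendall's tau between two rankings of the same items,
        rescaled to [0, 1]: 1 = identical order, 0 = reversed order."""
        assert sorted(rank_a) == sorted(rank_b), "rankings must cover the same items"
        pos_b = {item: i for i, item in enumerate(rank_b)}
        concordant = discordant = 0
        # A pair is concordant if both rankings order it the same way.
        for x, y in combinations(rank_a, 2):
            if pos_b[x] < pos_b[y]:
                concordant += 1
            else:
                discordant += 1
        n = len(rank_a)
        tau = (concordant - discordant) / (n * (n - 1) / 2)
        return (tau + 1) / 2

    # Hypothetical user ranking vs. engine ranking of five result URLs.
    user = ["d3", "d1", "d2", "d5", "d4"]
    engine = ["d1", "d2", "d3", "d4", "d5"]
    print(round(kendall_tau_unit(user, engine), 3))
    ```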
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.9, S.1254-1266
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.; Levene, M.: Analysis of change in users' assessment of search results over time (2017)
    
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.5, S.1137-1148