Search (1 result, page 1 of 1)

  • author_ss:"Chen, H.L."
  • theme_ss:"Benutzerstudien" (user studies)
  1. Su, L.T.; Chen, H.L.: Evaluation of Web search engines by undergraduate students (1999) 0.01
    0.008669936 = product of:
      0.017339872 = sum of:
        0.017339872 = product of:
          0.034679744 = sum of:
            0.034679744 = weight(_text_:web in 6546) [ClassicSimilarity], result of:
              0.034679744 = score(doc=6546,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.2039694 = fieldWeight in 6546, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
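    The breakdown above is Lucene's "explain" output for the term "web" in document 6546 under ClassicSimilarity (TF-IDF). As a rough illustration, the Python sketch below recombines the leaf values shown in the tree into the final score of 0.008669936. The formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), the two coord(1/2) factors) are those of classic Lucene scoring; the constants are copied from the tree rather than recomputed from the index.

    import math

    # Leaf values copied from the explain tree above.
    freq       = 4.0          # termFreq of "web" in doc 6546
    idf        = 3.2635105    # reported as idf(docFreq=4597, maxDocs=44218)
    query_norm = 0.052098576  # queryNorm
    field_norm = 0.03125      # fieldNorm(doc=6546)

    # ClassicSimilarity building blocks.
    tf           = math.sqrt(freq)               # 2.0
    query_weight = idf * query_norm              # ~0.17002425
    field_weight = tf * idf * field_norm         # ~0.2039694
    raw_score    = query_weight * field_weight   # ~0.034679744

    # idf itself is 1 + ln(maxDocs / (docFreq + 1)); check against the reported value.
    assert abs(idf - (1 + math.log(44218 / (4597 + 1)))) < 1e-5

    # Two coord(1/2) factors (one of two query clauses matched, at two nesting levels).
    final_score = raw_score * 0.5 * 0.5
    print(f"{final_score:.9f}")                  # 0.008669936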
    
    Abstract
    This research continues to explore the user's evaluation of Web search engines using a methodology proposed by Su (1997) and tested in a pilot study (Su, Chen, & Dong, 1998). It seeks to generate useful insight for system design and improvement, and for engine choice. The researchers were interested in how undergraduate students used four selected engines to retrieve information for their studies or personal interests and how they evaluated the interaction and search results retrieved by the four engines. Measures used were based on five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Thirty-six undergraduate juniors and seniors were recruited from the disciplines of sciences, social sciences and humanities. Each searched his/her own topic on all four engines in an assigned order, and each made relevance judgements of retrieved items in relation to his/her information need or problem. The study found some significant differences among the four engines, but none dominated in every aspect of the multidimensional evaluation. Alta Vista had the highest number of relevant and partially relevant documents, the best relative recall and the highest precision ratio based on PR1; Alta Vista had significantly better scores for these three measures than Lycos. Infoseek had the highest satisfaction rating for response time. Both Infoseek and Excite had significantly higher satisfaction ratings for response time than Lycos. Excite had the best score for output display. Excite and Alta Vista had significantly better scores for output display than Lycos. Excite had the best rating for time saving, while Alta Vista achieved the best score for value of search results as a whole and for overall performance. Alta Vista and Excite had significantly better ratings for these three measures than Lycos. Lycos achieved the best relevance ranking performance. Further work will provide a more complete picture for engine comparison and choice by taking into account participant characteristics and identifying factors contributing to the user's satisfaction, in order to gain better insight for system design and improvement.
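    The evaluation rests on set-based retrieval measures: precision ratios such as PR1, and relative recall, which in engine-comparison studies is commonly computed against the pooled relevant items found by all compared engines for the same topic. The study's actual instruments are defined in Su (1997) and are not spelled out in the abstract; the sketch below is only a generic illustration of such measures, with the treatment of partially relevant items left as an explicit assumption rather than a claim about PR1.

    from typing import List

    def precision(judgements: List[str], count_partial: bool = False) -> float:
        """Fraction of retrieved items judged relevant by the searcher.
        judgements: 'relevant', 'partial' or 'not relevant' per retrieved item.
        count_partial: whether partially relevant items count (an assumption,
        not the study's stated PR1 definition)."""
        hits = sum(1 for j in judgements
                   if j == "relevant" or (count_partial and j == "partial"))
        return hits / len(judgements) if judgements else 0.0

    def relative_recall(relevant_from_engine: int, relevant_from_all: int) -> float:
        """Relevant items one engine returned, divided by the pooled relevant
        items found by all compared engines for the same topic (a common way
        to approximate recall when the full relevant set is unknown)."""
        return relevant_from_engine / relevant_from_all if relevant_from_all else 0.0

    # Hypothetical judgements of one engine's results for one participant's topic.
    judged = ["relevant", "partial", "not relevant", "relevant", "partial"]
    print(precision(judged))                      # 0.4 (strictly relevant only)
    print(precision(judged, count_partial=True))  # 0.8 (partials counted)
    print(relative_recall(4, 10))                 # 0.4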