Search (5 results, page 1 of 1)

  • author_ss:"Su, L.T."
  1. Su, L.T.: ¬A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.11
    0.10858273 = product of:
      0.21716546 = sum of:
        0.2019939 = weight(_text_:engines in 2117) [ClassicSimilarity], result of:
          0.2019939 = score(doc=2117,freq=20.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.88758314 = fieldWeight in 2117, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2117)
        0.015171562 = product of:
          0.030343125 = sum of:
            0.030343125 = weight(_text_:22 in 2117) [ClassicSimilarity], result of:
              0.030343125 = score(doc=2117,freq=2.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.19345059 = fieldWeight in 2117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2117)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines, and also significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.
    Date
    24. 1.2004 18:27:22
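    The score explanations above follow Lucene's ClassicSimilarity (TF-IDF) formula, with idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq). A minimal Python sketch, reproducing the values shown for doc 2117 (the function name and parameter layout are illustrative, not Lucene's actual API):

    ```python
    import math

    def classic_similarity_term(freq, doc_freq, max_docs, query_norm, field_norm):
        """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.080822 for "engines"
        tf = math.sqrt(freq)                             # 4.472136 for freq=20.0
        query_weight = idf * query_norm                  # 0.22757743
        field_weight = tf * idf * field_norm             # 0.88758314
        return query_weight * field_weight               # 0.2019939

    # Term weights for doc 2117, using the docFreq/norm values from the explanation:
    engines = classic_similarity_term(20, 746, 44218, 0.04479146, 0.0390625)
    term_22 = classic_similarity_term(2, 3622, 44218, 0.04479146, 0.0390625)

    # Document score: halve the second clause by its inner coord(1/2),
    # sum, then apply the outer coord(2/4).
    doc_score = (engines + 0.5 * term_22) * 0.5          # 0.10858273
    ```

    This reproduces the top-level score of 0.10858273 for result 1 from the leaf values in its explanation tree.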
  2. Dong, X.; Su, L.T.: Search engines on the World Wide Web and information retrieval from the Internet : a review and evaluation (1997) 0.04
    0.044254646 = product of:
      0.17701858 = sum of:
        0.17701858 = weight(_text_:engines in 155) [ClassicSimilarity], result of:
          0.17701858 = score(doc=155,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.7778389 = fieldWeight in 155, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0625 = fieldNorm(doc=155)
      0.25 = coord(1/4)
    
    Abstract
    Describes the categories and special features of WWW databases and compares them with traditional databases, and presents a state-of-the-art review of the literature on the testing and evaluation of WWW-based search engines. Describes the different methodologies and measures used in previous studies and summarizes their findings. Presents some evaluative comments on previous studies and suggests areas for future investigation, particularly evaluation of Web-based search engines from the end user's perspective.
  3. Su, L.T.: Developing a comprehensive and systematic model of user evaluation of Web-based search engines (1997) 0.04
    0.03832564 = product of:
      0.15330257 = sum of:
        0.15330257 = weight(_text_:engines in 317) [ClassicSimilarity], result of:
          0.15330257 = score(doc=317,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.67362815 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.09375 = fieldNorm(doc=317)
      0.25 = coord(1/4)
    
  4. Su, L.T.; Chen, H.L.: Evaluation of Web search engines by undergraduate students (1999) 0.03
    0.031292755 = product of:
      0.12517102 = sum of:
        0.12517102 = weight(_text_:engines in 6546) [ClassicSimilarity], result of:
          0.12517102 = score(doc=6546,freq=12.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.5500151 = fieldWeight in 6546, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.03125 = fieldNorm(doc=6546)
      0.25 = coord(1/4)
    
    Abstract
    This research continues to explore the user's evaluation of Web search engines using a methodology proposed by Su (1997) and tested in a pilot study (Su, Chen, & Dong, 1998). It seeks to generate useful insight for system design and improvement, and for engine choice. The researchers were interested in how undergraduate students used four selected engines to retrieve information for their studies or personal interests and how they evaluated the interaction and search results retrieved by the four engines. Measures used were based on five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Thirty-six undergraduate juniors and seniors were recruited from the disciplines of sciences, social sciences and humanities. Each searched his/her own topic on all four engines in an assigned order, and each made relevance judgements of retrieved items in relation to his/her information need or problem. The study found some significant differences among the four engines, but none dominated in every aspect of the multidimensional evaluation. Alta Vista had the highest number of relevant and partially relevant documents, the best relative recall, and the highest precision ratio based on PR1; Alta Vista had significantly better scores on these three measures than Lycos. Infoseek had the highest satisfaction rating for response time. Both Infoseek and Excite had significantly higher satisfaction ratings for response time than Lycos. Excite had the best score for output display. Excite and Alta Vista had significantly better scores for output display than Lycos. Excite had the best rating for time saving, while Alta Vista achieved the best score for value of search results as a whole and for overall performance. Alta Vista and Excite had significantly better ratings for these three measures than Lycos. Lycos achieved the best relevance ranking performance. Further work will provide a more complete picture for engine comparison and choice by taking into account participant characteristics, and will identify factors contributing to the user's satisfaction to gain better insight for system design and improvement.
  5. Su, L.T.: ¬A comprehensive and systematic model of user evaluation of Web search engines : I. Theory and background (2003) 0.03
    0.027659154 = product of:
      0.110636614 = sum of:
        0.110636614 = weight(_text_:engines in 5164) [ClassicSimilarity], result of:
          0.110636614 = score(doc=5164,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.4861493 = fieldWeight in 5164, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5164)
      0.25 = coord(1/4)
    
    Abstract
    The project proposes and tests a comprehensive and systematic model of user evaluation of Web search engines. The project contains two parts. Part I describes the background and the model including a set of criteria and measures, and a method for implementation. It includes a literature review for two periods. The early period (1995-1996) portrays the settings for developing the model and the later period (1997-2000) places two applications of the model among contemporary evaluation work. Part II presents one of the applications that investigated the evaluation of four major search engines by 36 undergraduates from three academic disciplines. It reports results from statistical analyses of quantitative data for the entire sample and among disciplines, and content analysis of verbal data containing users' reasons for satisfaction. The proposed model aims to provide systematic feedback to engine developers or service providers for system improvement and to generate useful insight for system design and tool choice. The model can be applied to evaluating other compatible information retrieval systems or information retrieval (IR) techniques. It intends to contribute to developing a theory of relevance that goes beyond topicality to include value and usefulness for designing user-oriented information retrieval systems.