Search (7 results, page 1 of 1)

  • author_ss:"Su, L.T."
  • language_ss:"e"
  1. Su, L.T.: A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.24
    0.24457537 = product of:
      0.36686304 = sum of:
        0.10609328 = weight(_text_:search in 2117) [ClassicSimilarity], result of:
          0.10609328 = score(doc=2117,freq=20.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.60717577 = fieldWeight in 2117, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2117)
        0.26076975 = sum of:
          0.22671333 = weight(_text_:engines in 2117) [ClassicSimilarity], result of:
            0.22671333 = score(doc=2117,freq=20.0), product of:
              0.25542772 = queryWeight, product of:
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.05027291 = queryNorm
              0.88758314 = fieldWeight in 2117, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                5.080822 = idf(docFreq=746, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2117)
          0.03405643 = weight(_text_:22 in 2117) [ClassicSimilarity], result of:
            0.03405643 = score(doc=2117,freq=2.0), product of:
              0.17604718 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05027291 = queryNorm
              0.19345059 = fieldWeight in 2117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2117)
      0.6666667 = coord(2/3)
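    The tree above is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, the hit's score can be recomputed in Python from the values shown; the idf formula is ClassicSimilarity's 1 + ln(maxDocs/(docFreq+1)), while queryNorm is taken as given because it depends on query clauses that do not match this document and are not shown here.

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm,
        # with tf = sqrt(term frequency in the document)
        i = idf(doc_freq, max_docs)
        return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

    QUERY_NORM = 0.05027291  # taken as given from the explain output
    FIELD_NORM = 0.0390625   # length norm stored for doc 2117
    MAX_DOCS = 44218

    w_search  = clause_score(20.0, 3718, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.10609
    w_engines = clause_score(20.0,  746, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.22671
    w_22      = clause_score( 2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.03406

    # coord(2/3): the document matches 2 of the 3 top-level query clauses
    print((w_search + (w_engines + w_22)) * (2.0 / 3.0))  # ~0.244575, matching the explain value up to rounding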
    
    Abstract
    This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with them. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction, together with reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines, and also significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with the search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both the quantitative analysis and the content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.
    Date
    24. 1.2004 18:27:22
  2. Dong, X.; Su, L.T.: Search engines on the World Wide Web and information retrieval from the Internet : a review and evaluation (1997) 0.13
    0.12821087 = product of:
      0.1923163 = sum of:
        0.0929755 = weight(_text_:search in 155) [ClassicSimilarity], result of:
          0.0929755 = score(doc=155,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5321022 = fieldWeight in 155, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=155)
        0.099340804 = product of:
          0.19868161 = sum of:
            0.19868161 = weight(_text_:engines in 155) [ClassicSimilarity], result of:
              0.19868161 = score(doc=155,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.7778389 = fieldWeight in 155, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0625 = fieldNorm(doc=155)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes the categories and special features of WWW databases, compares them with traditional databases, and presents a state-of-the-art review of the literature on the testing and evaluation of WWW-based search engines. Describes the different methodologies and measures used in previous studies and summarizes their findings. Presents some evaluative comments on previous studies and suggests areas for future investigation, particularly the evaluation of Web-based search engines from the end user's perspective.
  3. Su, L.T.: Developing a comprehensive and systematic model of user evaluation of Web-based search engines (1997) 0.11
    0.11103386 = product of:
      0.16655079 = sum of:
        0.08051914 = weight(_text_:search in 317) [ClassicSimilarity], result of:
          0.08051914 = score(doc=317,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.460814 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.09375 = fieldNorm(doc=317)
        0.08603165 = product of:
          0.1720633 = sum of:
            0.1720633 = weight(_text_:engines in 317) [ClassicSimilarity], result of:
              0.1720633 = score(doc=317,freq=2.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.67362815 = fieldWeight in 317, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.09375 = fieldNorm(doc=317)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
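    Note the fieldNorm here (0.09375) versus 0.0390625 for the first hit. A hedged explanation: ClassicSimilarity's length norm is 1/sqrt(number of terms in the field), quantized to a single byte at index time, so this abstract-less record has a much shorter indexed text and earns a higher per-match weight. The term counts below are back-of-envelope assumptions for illustration, not values read from the index.

    import math

    def length_norm(num_terms):
        # ClassicSimilarity length norm before byte quantization
        return 1.0 / math.sqrt(num_terms)

    print(length_norm(110))  # ~0.0953 -> stored (quantized) as 0.09375
    print(length_norm(650))  # ~0.0392 -> stored (quantized) as 0.0390625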
    
  4. Su, L.T.; Chen, H.L.: Evaluation of Web search engines by undergraduate students (1999) 0.08
    0.08261599 = product of:
      0.12392397 = sum of:
        0.053679425 = weight(_text_:search in 6546) [ClassicSimilarity], result of:
          0.053679425 = score(doc=6546,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.30720934 = fieldWeight in 6546, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=6546)
        0.07024455 = product of:
          0.1404891 = sum of:
            0.1404891 = weight(_text_:engines in 6546) [ClassicSimilarity], result of:
              0.1404891 = score(doc=6546,freq=12.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.5500151 = fieldWeight in 6546, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6546)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This research continues to explore the user's evaluation of Web search engines using a methodology proposed by Su (1997) and tested in a pilot study (Su, Chen, & Dong, 1998). It seeks to generate useful insight for system design and improvement, and for engine choice. The researchers were interested in how undergraduate students used four selected engines to retrieve information for their studies or personal interests, and how they evaluated the interaction and the search results retrieved by the four engines. Measures used were based on five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Thirty-six undergraduate juniors and seniors were recruited from the disciplines of sciences, social sciences and humanities. Each searched his/her own topic on all four engines in an assigned order, and each made relevance judgements of retrieved items in relation to his/her information need or problem. The study found some significant differences among the four engines, but none dominated in every aspect of the multidimensional evaluation. Alta Vista had the highest number of relevant and partially relevant documents, the best relative recall, and the highest precision ratio based on PR1; Alta Vista had significantly better scores on these three measures than Lycos. Infoseek had the highest satisfaction rating for response time. Both Infoseek and Excite had significantly higher satisfaction ratings for response time than Lycos. Excite had the best score for output display. Excite and Alta Vista had significantly better scores for output display than Lycos. Excite had the best rating for time saving, while Alta Vista achieved the best score for value of search results as a whole and for overall performance. Alta Vista and Excite had significantly better ratings on these three measures than Lycos. Lycos achieved the best relevance ranking performance. Further work will provide a more complete picture for engine comparison and choice by taking participant characteristics into account, and will identify factors contributing to user satisfaction, to gain better insight for system design and improvement.
  5. Su, L.T.: A comprehensive and systematic model of user evaluation of Web search engines : I. Theory and background (2003) 0.08
    0.0801318 = product of:
      0.12019769 = sum of:
        0.058109686 = weight(_text_:search in 5164) [ClassicSimilarity], result of:
          0.058109686 = score(doc=5164,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 5164, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5164)
        0.062088005 = product of:
          0.12417601 = sum of:
            0.12417601 = weight(_text_:engines in 5164) [ClassicSimilarity], result of:
              0.12417601 = score(doc=5164,freq=6.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.4861493 = fieldWeight in 5164, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5164)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The project proposes and tests a comprehensive and systematic model of user evaluation of Web search engines. The project contains two parts. Part I describes the background and the model including a set of criteria and measures, and a method for implementation. It includes a literature review for two periods. The early period (1995-1996) portrays the settings for developing the model and the later period (1997-2000) places two applications of the model among contemporary evaluation work. Part II presents one of the applications that investigated the evaluation of four major search engines by 36 undergraduates from three academic disciplines. It reports results from statistical analyses of quantitative data for the entire sample and among disciplines, and content analysis of verbal data containing users' reasons for satisfaction. The proposed model aims to provide systematic feedback to engine developers or service providers for system improvement and to generate useful insight for system design and tool choice. The model can be applied to evaluating other compatible information retrieval systems or information retrieval (IR) techniques. It intends to contribute to developing a theory of relevance that goes beyond topicality to include value and usefulness for designing user-oriented information retrieval systems.
  6. Su, L.T.: Value of search results as a whole as a measure of information retrieval performance (1996) 0.02
    0.023243874 = product of:
      0.06973162 = sum of:
        0.06973162 = weight(_text_:search in 7439) [ClassicSimilarity], result of:
          0.06973162 = score(doc=7439,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.39907667 = fieldWeight in 7439, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=7439)
      0.33333334 = coord(1/3)
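    This hit (like the next) matches only the "search" clause, so the coord factor drops from 2/3 to 1/3. A minimal sketch checking that arithmetic against the explain values:

    # coord(matched, total) simply scales the clause sum by matched/total
    def coord(matched, total):
        return matched / total

    assert abs(0.06973162 * coord(1, 3) - 0.023243874) < 1e-8  # hit 6
    assert abs(0.04744636 * coord(1, 3) - 0.015815454) < 1e-8  # hit 7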
    
    Abstract
    Examines the conceptual categories or dimensions of the users' reasons for assigning particular ratings to the value of search results, and the relationships between these dimensions of value and the dimensions of success identified in an earlier study. 40 end users with individual information problems from an academic environment were observed interacting with 6 professional intermediaries who searched on their behalf in large operational systems, at the users' own cost. A search was conducted for each individual problem in the user's presence and with user participation. Quantitative data consisting of scores for all measures studied, and verbal data containing reasons for assigning certain ratings to selected measures, were collected. The portion of the verbal data containing users' reasons for assigning particular value ratings in the previous study will be transcribed and content-analyzed for the current study.
  7. Su, L.T.: The relevance of recall and precision in user evaluation (1994) 0.02
    0.015815454 = product of:
      0.04744636 = sum of:
        0.04744636 = weight(_text_:search in 6933) [ClassicSimilarity], result of:
          0.04744636 = score(doc=6933,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.27153727 = fieldWeight in 6933, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6933)
      0.33333334 = coord(1/3)
    
    Abstract
    The appropriateness of evaluation criteria and measures has been a subject of debate and a vital concern in the information retrieval evaluation literature. A study was conducted to investigate the appropriateness of 20 measures for evaluating interactive information retrieval performance, representing 4 major evaluation criteria. Among the 20 measures studied were the 2 best-known relevance-based measures of effectiveness, recall and precision. The user's judgment of information success was used as the criterion measure with which all other 20 measures were to be correlated. A sample of 40 end-users with individual information problems from an academic environment was observed, interacting with 6 professional intermediaries searching on their behalf in large operational systems. Quantitative data consisting of values for all measures studied, and verbal data containing users' reasons for assigning certain values to selected measures, were collected. Statistical analysis of the quantitative data showed that precision, one of the most important traditional measures of effectiveness, is not significantly correlated with the user's judgment of success. Users appear to be more concerned with absolute recall than with precision, although absolute recall was not directly tested in this study. 4 related measures of recall and precision were found to be significantly correlated with success. Among these are the user's satisfaction with the completeness of the search results and the user's satisfaction with the precision of the search. This article explores possible explanations for this outcome through content analysis of the users' verbal data. The analysis shows that high precision does not always mean high quality (relevancy, completeness, etc.) to users, because users' expectations differ. The user's purpose in obtaining information is suggested to be the primary cause of the high concern for recall. Implications for research and practice are discussed.
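    The precision and relative-recall measures recurring in these abstracts are simple ratios; a hedged sketch with invented numbers follows (relative recall is used in this literature because absolute recall is unknowable on the open Web, so the denominator is the pool of relevant items retrieved by all engines compared):

    def precision(relevant_retrieved, total_retrieved):
        # fraction of what the engine returned that was judged relevant
        return relevant_retrieved / total_retrieved

    def relative_recall(relevant_retrieved, relevant_in_pool):
        # fraction of the pooled relevant items this engine found
        return relevant_retrieved / relevant_in_pool

    # e.g. an engine returns 20 items, 8 judged relevant, out of a pool of 15:
    print(precision(8, 20), relative_recall(8, 15))  # 0.4 0.5333...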