Search (7 results, page 1 of 1)

  • author_ss:"Su, L.T."
  • year_i:[1990 TO 2000}
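
For anyone wanting to reproduce this result set: the two filters above are standard Lucene/Solr syntax, an exact author facet plus a year range that includes 1990 but excludes 2000 (the mixed [ ... } brackets are deliberate; a square bracket is inclusive, a curly one exclusive). A minimal sketch of an equivalent Solr request follows; the host, core name ("literature"), and handler path are assumptions, not taken from this page:

    from urllib.parse import urlencode

    # Hypothetical Solr endpoint; host, core name and handler are placeholders.
    base = "http://localhost:8983/solr/literature/select"

    params = urlencode([
        ("q", "*:*"),
        # The two facet filters shown above: exact author match and a
        # [inclusive TO exclusive} year range.
        ("fq", 'author_ss:"Su, L.T."'),
        ("fq", "year_i:[1990 TO 2000}"),
        ("debugQuery", "true"),  # asks Solr to emit the score explanations
    ])

    print(base + "?" + params)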
  1. Su, L.T.: Developing a comprehensive and systematic model of user evaluation of Web-based search engines (1997) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 317) [ClassicSimilarity], result of:
              0.011481222 = score(doc=317,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 317, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=317)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
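
The indented tree above is Lucene's "explain" output for ClassicSimilarity, i.e. plain TF-IDF arithmetic: tf = sqrt(termFreq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and two coord(1/2) factors that each halve the result. A minimal sketch recomputing the total for hit 1, using only the constants printed in the explanation (nothing here queries a live index):

    import math

    # Constants copied verbatim from the explanation for hit 1 (doc 317).
    idf = 1.153047            # idf(docFreq=37942, maxDocs=44218)
    query_norm = 0.046056706  # queryNorm
    field_norm = 0.09375      # fieldNorm(doc=317), length normalization
    freq = 4.0                # termFreq of "a" in this field

    tf = math.sqrt(freq)                  # ClassicSimilarity tf: sqrt(4.0) = 2.0
    query_weight = idf * query_norm       # 0.053105544 = queryWeight
    field_weight = tf * idf * field_norm  # 0.2161963   = fieldWeight
    raw = query_weight * field_weight     # 0.011481222

    score = raw * 0.5 * 0.5               # two nested coord(1/2) factors
    print(f"{score:.10f}")                # ~0.0028703054, matching the listing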
  2. Su, L.T.: Evaluation measures for interactive information retrieval (1992) 0.00
    0.00270615 = product of:
      0.0054123 = sum of:
        0.0054123 = product of:
          0.0108246 = sum of:
            0.0108246 = weight(_text_:a in 3645) [ClassicSimilarity], result of:
              0.0108246 = score(doc=3645,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20383182 = fieldWeight in 3645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  3. Dong, X.; Su, L.T.: Search engines on the World Wide Web and information retrieval from the Internet : a review and evaluation (1997) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 155) [ClassicSimilarity], result of:
              0.009374379 = score(doc=155,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 155, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=155)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Describes the categories and special features of WWW databases, compares them with traditional databases, and presents a state-of-the-art review of the literature on the testing and evaluation of WWW-based search engines. Describes the different methodologies and measures used in previous studies and summarizes their findings. Presents some evaluative comments on previous studies and suggests areas for future investigation, particularly the evaluation of Web-based search engines from the end user's perspective
    Type
    a
  4. Su, L.T.: Is relevance an adequate criterion for retrieval system evaluation : an empirical enquiry into the user's evaluation (1993) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 7959) [ClassicSimilarity], result of:
              0.008118451 = score(doc=7959,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 7959, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7959)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Considers whether relevance is an adequate criterion for retrieval system evaluation. Addresses this question by providing a brief review of the information retrieval literature and by presenting some empirical evidence collected in an earlier study concerning users' assessment of retrieval system performance. Identifies a total of 26 success dimensions through content analysis of 203 users' reasons for system success. The user's judgement of system performance is a multi-dimensional assessment: although relevance is an important criterion, many other considerations affect assessments of system success. Discusses dimensions or categories of success both related and unrelated to relevance. Compares the results from content analysis of verbal data with those from factor analysis of quantitative data. Discusses implications and future research
    Type
    a
  5. Su, L.T.: Value of search results as a whole as a measure of information retrieval performance (1996) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 7439) [ClassicSimilarity], result of:
              0.008118451 = score(doc=7439,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 7439, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7439)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Examines: the conceptual categories or dimensions of the users' reasons for assigning particular ratings on the value of search results, and the relationships between these dimensions of value and the dimensions of success identified in an earlier study. 40 end users with individual information problems from an academic environment were observed, interacting with 6 professional intermediaries searching on their behalf in large operational systems at the users' own cost. A search was conducted for each individual problem in the users' presence and with user participation. Quantitative data consisting of scores for all measures studied and verbal data containing reasons for assigning certain ratings to selected measures were collected. The portion of the verbal data including users' reasons for assigning particular value ratings from the previous study will be transcribed and content analyzed for the current study
    Type
    a
  6. Su, L.T.: The relevance of recall and precision in user evaluation (1994) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 6933) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=6933,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 6933, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6933)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The appropriateness of evaluation criteria and measures has been a subject of debate and a vital concern in the information retrieval evaluation literature. A study was conducted to investigate the appropriateness of 20 measures for evaluating interactive information retrieval performance, representing 4 major evaluation criteria. Among the 20 measures studied were the 2 best-known relevance-based measures of effectiveness, recall and precision. The user's judgment of information success was used as the devised criterion measure with which all 20 measures were to be correlated. A sample of 40 end-users with individual information problems from an academic environment was observed, interacting with 6 professional intermediaries searching on their behalf in large operational systems. Quantitative data consisting of values for all measures studied and verbal data containing users' reasons for assigning certain values to selected measures were collected. Statistical analysis of the quantitative data showed that precision, one of the most important traditional measures of effectiveness, is not significantly correlated with the user's judgment of success. Users appear to be more concerned with absolute recall than with precision, although absolute recall was not directly tested in this study. 4 related measures of recall and precision are found to be significantly correlated with success. Among these are user's satisfaction with completeness of search results and user's satisfaction with precision of the search. This article explores the possible explanations for this outcome through content analysis of users' verbal data. The analysis shows that high precision does not always mean high quality (relevancy, completeness, etc.) to users, because users' expectations differ. The user's purpose in obtaining information is suggested to be the primary cause of the high concern for recall. Implications for research and practice are discussed
    Type
    a
  7. Su, L.T.; Chen, H.L.: Evaluation of Web search engines by undergraduate students (1999) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 6546) [ClassicSimilarity], result of:
              0.0054123 = score(doc=6546,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 6546, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This research continues to explore the user's evaluation of Web search engines using a methodology proposed by Su (1997) and tested in a pilot study (Su, Chen, & Dong, 1998). It seeks to generate useful insight for system design and improvement, and for engine choice. The researchers were interested in how undergraduate students used four selected engines to retrieve information for their studies or personal interests and how they evaluated the interaction and search results retrieved by the four engines. Measures used were based on five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Thirty-six undergraduate juniors and seniors were recruited from the disciplines of sciences, social sciences and humanities. Each searched his/her own topic on all four engines in an assigned order and each made relevance judgements of retrieved items in relation to his/her information need or problem. The study found some significant differences among the four engines, but none dominated in every aspect of the multidimensional evaluation. Alta Vista had the highest number of relevant and partially relevant documents, the best relative recall and the highest precision ratio based on PR1; Alta Vista had significantly better scores for these three measures than Lycos. Infoseek had the highest satisfaction rating for response time. Both Infoseek and Excite had significantly higher satisfaction ratings for response time than Lycos. Excite had the best score for output display. Excite and Alta Vista had significantly better scores for output display than Lycos. Excite had the best rating for time saving, while Alta Vista achieved the best score for value of search results as a whole and for overall performance. Alta Vista and Excite had significantly better ratings for these three measures than Lycos. Lycos achieved the best relevance ranking performance. Further work will provide a more complete picture for engine comparison and choice by taking into account participant characteristics and identifying factors contributing to the user's satisfaction, to gain better insight for system design and improvement
    Type
    a
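
Across all seven hits the idf and queryNorm values are identical, so the ranking is driven entirely by term frequency and by field length normalization (fieldNorm). A short sketch, again using only values copied from the explain trees above, reproduces every total in the listing:

    import math

    IDF = 1.153047            # same idf for every hit
    QUERY_NORM = 0.046056706  # same queryNorm for every hit
    COORD = 0.5 * 0.5         # two nested coord(1/2) factors

    # (hit number, termFreq, fieldNorm) copied from the seven explain trees.
    hits = [(1, 4.0, 0.09375), (2, 2.0, 0.125), (3, 6.0, 0.0625),
            (4, 8.0, 0.046875), (5, 8.0, 0.046875), (6, 10.0, 0.0390625),
            (7, 8.0, 0.03125)]

    for n, freq, field_norm in hits:
        field_weight = math.sqrt(freq) * IDF * field_norm
        score = (IDF * QUERY_NORM) * field_weight * COORD
        print(f"hit {n}: {score:.10f}")
    # Prints the totals in the same descending order as the listing,
    # from ~0.0028703054 (hit 1) down to ~0.0013530750 (hit 7).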