Search (4 results, page 1 of 1)

  • author_ss:"Su, L.T."
  • year_i:[1990 TO 2000}
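Both facets use Lucene query syntax; in the range facet a square bracket marks an inclusive bound and a curly brace an exclusive one, so year_i:[1990 TO 2000} matches 1990 through 1999. A minimal sketch of how such a filtered query could be issued against a Solr endpoint (the host, core name, and debug flag wiring are assumptions, not taken from this page):

    import requests

    # Hypothetical Solr core; host and core name are assumptions.
    SOLR_URL = "http://localhost:8983/solr/lit/select"

    params = {
        "q": 'author_ss:"Su, L.T."',
        "fq": "year_i:[1990 TO 2000}",  # [ inclusive, } exclusive: 1990..1999
        "debugQuery": "true",           # asks Solr for score explanations like those below
        "wt": "json",
    }

    hits = requests.get(SOLR_URL, params=params).json()["response"]
    print(hits["numFound"])  # this page reports 4 matches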
  1. Su, L.T.: Evaluation measures for interactive information retrieval (1992) 0.02
    0.016956951 = product of:
      0.067827806 = sum of:
        0.067827806 = product of:
          0.13565561 = sum of:
            0.13565561 = weight(_text_:processing in 3645) [ClassicSimilarity], result of:
              0.13565561 = score(doc=3645,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7156181 = fieldWeight in 3645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.125 = fieldNorm(doc=3645)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 28(1992) no.4, pp.503-516
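Each hit is followed by a Lucene ClassicSimilarity explain tree. As a sanity check, a short sketch that reproduces the arithmetic of the tree above (the function name is mine; the formulas are the classic tf-idf ones the tree itself reports: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

    import math

    def classic_term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
        """One term's weight as ClassicSimilarity computes it."""
        tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 4.048147
        query_weight = idf * query_norm                  # 0.18956426
        field_weight = tf * idf * field_norm             # 0.7156181
        return query_weight * field_weight               # 0.13565561

    # weight(_text_:processing in 3645) from result 1 above:
    w = classic_term_weight(freq=2.0, doc_freq=2097, max_docs=44218,
                            field_norm=0.125, query_norm=0.046827413)
    print(w * 0.5 * 0.25)  # coord(1/2) * coord(1/4) -> ~0.016956951, shown as 0.02

The same function reproduces the _text_:data weights in results 2-4 below when given their freq, fieldNorm, and docFreq=5088.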
  2. Su, L.T.: Value of search results as a whole as a measure of information retrieval performance (1996) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 7439) [ClassicSimilarity], result of:
          0.053759433 = score(doc=7439,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 7439, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7439)
      0.25 = coord(1/4)
    
    Abstract
    Examines the conceptual categories or dimensions of users' reasons for assigning particular ratings to the value of search results, and the relationships between these dimensions of value and the dimensions of success identified in an earlier study. 40 end users with individual information problems from an academic environment were observed interacting with 6 professional intermediaries, who searched on their behalf in large operational systems at the users' own cost. A search was conducted for each individual problem in the user's presence and with user participation. Quantitative data consisting of scores for all measures studied, and verbal data containing reasons for assigning certain ratings to selected measures, were collected. The portion of the verbal data covering users' reasons for assigning particular value ratings from the previous study will be transcribed and content-analyzed for the current study.
  3. Su, L.T.: The relevance of recall and precision in user evaluation (1994) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 6933) [ClassicSimilarity], result of:
          0.05173004 = score(doc=6933,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 6933, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6933)
      0.25 = coord(1/4)
    
    Abstract
    The appropriateness of evaluation criteria and measures has been a subject of debate and a vital concern in the information retrieval evaluation literature. A study was conducted to investigate the appropriateness of 20 measures for evaluating interactive information retrieval performance, representing 4 major evaluation criteria. Among the 20 measures studied were the 2 best-known relevance-based measures of effectiveness, recall and precision. The user's judgment of information success was used as the devised criterion measure with which all 20 measures were to be correlated. A sample of 40 end users with individual information problems from an academic environment was observed, interacting with 6 professional intermediaries searching on their behalf in large operational systems. Quantitative data consisting of values for all measures studied, and verbal data containing users' reasons for assigning certain values to selected measures, were collected. Statistical analysis of the quantitative data showed that precision, one of the most important traditional measures of effectiveness, is not significantly correlated with the user's judgment of success. Users appear to be more concerned with absolute recall than with precision, although absolute recall was not directly tested in this study. 4 related measures of recall and precision were found to be significantly correlated with success, among them the user's satisfaction with the completeness of the search results and the user's satisfaction with the precision of the search. The article explores possible explanations for this outcome through content analysis of the users' verbal data. The analysis shows that high precision does not always mean high quality (relevancy, completeness, etc.) to users, because users' expectations differ. The user's purpose in obtaining information is suggested as the primary cause of the high concern for recall. Implications for research and practice are discussed.
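For reference, the two contested measures are simple ratios over the retrieved and relevant document sets; a minimal illustration (the sets and names are invented for the example):

    def precision(retrieved: set, relevant: set) -> float:
        """Fraction of retrieved documents that are relevant."""
        return len(retrieved & relevant) / len(retrieved)

    def recall(retrieved: set, relevant: set) -> float:
        """Fraction of relevant documents that were retrieved."""
        return len(retrieved & relevant) / len(relevant)

    # Toy run: 10 documents retrieved, 4 of the 8 relevant ones among them.
    retrieved = set(range(10))
    relevant = {0, 1, 2, 3, 20, 21, 22, 23}
    print(precision(retrieved, relevant))  # 0.4
    print(recall(retrieved, relevant))     # 0.5

A system can score high on one measure and low on the other, which is why the abstract's finding, that users weight completeness (recall) over precision, matters for the choice of evaluation measures.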
  4. Su, L.T.: Is relevance an adequate criterion for retrieval system evaluation : an empirical enquiry into the user's evaluation (1993) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 7959) [ClassicSimilarity], result of:
          0.043894395 = score(doc=7959,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 7959, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7959)
      0.25 = coord(1/4)
    
    Abstract
    Considers whether relevance is an adequate criterion for retrieval system evaluation. Addresses this question through a brief review of the information retrieval literature and through empirical evidence, collected in an earlier study, concerning users' assessments of retrieval system performance. Identifies a total of 26 success dimensions through content analysis of 203 users' reasons for system success. The user's judgment of system performance is a multidimensional assessment: although relevance is an important criterion, many other considerations affect assessments of system success. Discusses the dimensions or categories of success that are related to relevance as well as those that are not. Compares the results from content analysis of the verbal data with those from factor analysis of the quantitative data. Discusses implications and future research.