Search (2 results, page 1 of 1)

  • author_ss:"Hastings, S.K."
  1. Hastings, S.K.: Evaluation of image retrieval systems : role of user feedback (1999) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 845) [ClassicSimilarity], result of:
              0.056588627 = score(doc=845,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 845, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=845)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Intellectual access to a growing number of networked image repositories is but a small part of the much larger problem of intellectual access to new information formats. As more and more information becomes available in digital formats, it is imperative that we understand how people retrieve and use images. Several studies have investigated how users search for images, but there are few evaluation studies of image retrieval systems. Preliminary findings from research in progress indicate a need for improved browsing tools, image manipulation software, feedback mechanisms, and query analysis. Comparisons are made to previous research results from a study of intellectual access to digital art images. This discussion will focus on the problems of image retrieval identified in current research projects, report on an evaluation project in process, and propose a framework for evaluation studies of image retrieval systems that emphasizes the role of user feedback.
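The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch of that arithmetic, plugging in the numeric factors shown for result 1 (term "systems" in doc 845) — the formulas are standard ClassicSimilarity, and all inputs are read directly off the explain tree:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """Inverse document frequency as ClassicSimilarity defines it."""
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

def tf(freq: float) -> float:
    """Term-frequency factor: square root of the raw in-field frequency."""
    return math.sqrt(freq)

# Inputs taken from the explain tree above
freq, doc_freq, max_docs = 6.0, 5561, 44218
query_norm, field_norm = 0.052184064, 0.046875

query_weight = idf(doc_freq, max_docs) * query_norm             # ≈ 0.16037072
field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm  # ≈ 0.35286134
term_weight = query_weight * field_weight                       # ≈ 0.056588627

# Two nested boolean clauses each contribute coord(1/2) = 0.5,
# giving the final document score shown next to the title.
score = term_weight * 0.5 * 0.5                                 # ≈ 0.014147157
```

The same computation with freq=4.0 and doc 5224's norms reproduces the 0.011551105 score of result 2; only the tf factor (2.0 instead of 2.4494898) differs.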
  2. Liu, J.; Li, Y.; Hastings, S.K.: Simplified scheme of search task difficulty reasons (2019) 0.01
    0.011551105 = product of:
      0.02310221 = sum of:
        0.02310221 = product of:
          0.04620442 = sum of:
            0.04620442 = weight(_text_:systems in 5224) [ClassicSimilarity], result of:
              0.04620442 = score(doc=5224,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.28811008 = fieldWeight in 5224, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5224)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article reports on a study that aimed at simplifying a search task difficulty reason scheme. Liu, Kim, and Creel (2015) (denoted LKC15) developed a 21-item search task difficulty reason scheme using a controlled laboratory experiment. The current study simplified the scheme through another experiment that followed the same design as LKC15 and involved 32 university students. The study added one questionnaire item that presented the 21 difficulty reasons in a multiple-choice format. By comparing the current study with LKC15, a concept of primary top difficulty reasons was proposed, which reasonably simplified the 21-item scheme to an 8-item top reason list. This limited number of reasons is more manageable and makes it feasible for search systems to predict task difficulty reasons from observable user behaviors, which builds the basis for systems to improve user satisfaction based on predicted search difficulty reasons.