Search (4 results, page 1 of 1)

  • author_ss:"Iyengar, S.S."
  • year_i:[2000 TO 2010}
  1. Zachary, J.; Iyengar, S.S.; Barhen, J.: Content based image retrieval and information theory : a general approach (2001) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 6514) [ClassicSimilarity], result of:
          0.012364916 = score(doc=6514,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 6514, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6514)
      0.25 = coord(1/4)
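The explanation tree above can be reproduced directly. A minimal sketch of Lucene's ClassicSimilarity arithmetic for the term "information" in doc 6514, using the constants shown in the output (queryNorm and fieldNorm are taken as given; idf is recomputed from docFreq and maxDocs):

```python
import math

# score = queryWeight * fieldWeight * coord, where
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
doc_freq, max_docs = 20772, 44218
idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~1.7554779, as shown above
query_norm = 0.034944877                        # taken from the explanation
tf = math.sqrt(6.0)                             # termFreq=6.0 -> ~2.4494898
field_norm = 0.046875                           # length normalization for the field
coord = 1 / 4                                   # 1 of 4 query clauses matched

query_weight = idf * query_norm                 # ~0.06134496
field_weight = tf * idf * field_norm            # ~0.20156369
score = query_weight * field_weight * coord     # ~0.003091229
```

Multiplying the per-term weight by coord(1/4) yields the document score printed in the first line of the tree.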
    
    Abstract
    A fundamental aspect of content-based image retrieval (CBIR) is the extraction and representation of a visual feature that is an effective discriminant between pairs of images. Among the many visual features that have been studied, the distribution of color pixels in an image is the most common. The standard representation of color for content-based indexing in image databases is the color histogram. Vector-based distance functions are used to compute the similarity between two images as the distance between points in the color histogram space. This paper proposes an alternative real-valued representation of color based on the information-theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Specifically, the L1 norm for color histograms is shown to provide an upper bound on the difference between image entropy values. Our initial results suggest that image entropy is a promising approach to image
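As a concrete illustration of the abstract's proposal, a minimal sketch of the entropy representation, assuming normalized color histograms (the 8-bin values below are hypothetical, not from the paper):

```python
import math

def image_entropy(hist):
    """Shannon entropy of a normalized color histogram: a single real
    number summarizing the color distribution."""
    return -sum(p * math.log2(p) for p in hist if p > 0)

def l1_distance(h1, h2):
    """L1 norm between two histograms, the quantity the paper relates
    to the difference in entropy values."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Two toy 8-bin color histograms (hypothetical data).
h1 = [0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05]
h2 = [0.25, 0.25, 0.10, 0.10, 0.10, 0.10, 0.05, 0.05]
d_entropy = abs(image_entropy(h1) - image_entropy(h2))
d_l1 = l1_distance(h1, h2)
```

Comparing two scalars (entropies) is far cheaper than comparing two full histograms, which is the practical appeal discussed in the abstract.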
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.10, S.840-852
  2. Zachary, J.; Iyengar, S.S.: Information theoretic similarity measures for content based image retrieval (2001) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 6523) [ClassicSimilarity], result of:
          0.012364916 = score(doc=6523,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 6523, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6523)
      0.25 = coord(1/4)
    
    Abstract
    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. The comparisons that determine similarity between images depend on the representation of the features and the definition of an appropriate distance function. Most of the research literature uses vectors as the predominant representation, given the rich theory of vector spaces. While vectors are an extremely useful representation, their use in large databases may be prohibitive given their usually large dimensions and expensive similarity functions. In this paper, we propose similarity measures and an indexing algorithm based on information theory that permit an image to be represented as a single number. When used in conjunction with vectors, our method displays improved efficiency when querying large databases.
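The single-number representation lends itself to a cheap prefilter before any vector comparison. A hypothetical two-stage sketch (the database, threshold tau, and histograms are invented for illustration; the paper's actual indexing algorithm may differ):

```python
import math

def image_entropy(hist):
    return -sum(p * math.log2(p) for p in hist if p > 0)

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def entropy_filtered_search(query, db, tau, k=3):
    """Two-stage search: a cheap scalar entropy comparison first,
    then a full histogram distance only for surviving candidates."""
    q_ent = image_entropy(query)
    candidates = [(name, h) for name, h in db.items()
                  if abs(image_entropy(h) - q_ent) <= tau]
    ranked = sorted(candidates, key=lambda nh: l1_distance(query, nh[1]))
    return [name for name, _ in ranked[:k]]

# Toy database of 4-bin histograms (hypothetical data).
db = {
    "a": [0.5, 0.5, 0.0, 0.0],     # identical to the query
    "b": [0.4, 0.4, 0.1, 0.1],     # similar
    "c": [0.25, 0.25, 0.25, 0.25], # uniform, entropy far from the query's
}
result = entropy_filtered_search([0.5, 0.5, 0.0, 0.0], db, tau=0.8)
```

The scalar filter discards "c" without ever computing its histogram distance, which is where the claimed efficiency gain on large databases would come from.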
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.10, S.856-867
  3. Iyengar, S.S.: Visual based retrieval systems and Web mining (2001) 0.00
    0.0016826519 = product of:
      0.0067306077 = sum of:
        0.0067306077 = weight(_text_:information in 6520) [ClassicSimilarity], result of:
          0.0067306077 = score(doc=6520,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.10971737 = fieldWeight in 6520, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=6520)
      0.25 = coord(1/4)
    
    Abstract
    Relevance has been a difficult concept to define, let alone measure. In this paper, a simple operational definition of relevance is proposed for a Web-based library catalog: whether or not during a search session the user saves, prints, mails, or downloads a citation. If one of those actions is performed, the session is considered relevant to the user. An analysis is presented illustrating the advantages and disadvantages of this definition. With this definition and good transaction logging, it is possible to ascertain the relevance of a session. This was done for 905,970 sessions conducted with the University of California's Melvyl online catalog. Next, a methodology was developed to try to predict the relevance of a session. A number of variables were defined that characterize a session, none of which used any demographic information about the user. The values of the variables were computed for the sessions. Principal components analysis was used to extract a new set of variables out of the original set. A stratified random sampling technique was used to form ten strata such that each new stratum of 90,597 sessions contained the same proportion of relevant to nonrelevant sessions. Logistic regression was used to ascertain the regression coefficients for nine of the ten strata. Then, the coefficients were used to predict the relevance of the sessions in the missing stratum. Overall, 17.85% of the sessions were determined to be relevant. The predicted number of relevant sessions for all ten strata was 11%, a 6.85% difference. The authors believe that the methodology can be further refined and the prediction improved. This methodology could also have significant application in improving user searching and in predicting electronic commerce buying decisions without the use of personal demographic data.
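The stratified sampling step described above can be sketched in a few lines. A toy illustration, assuming sessions carry a relevant/nonrelevant label (the data below is synthetic, sized to mimic the abstract's 17.85% relevance rate; the paper's actual variables and regression step are not reproduced here):

```python
import random

def stratified_folds(sessions, k=10, seed=0):
    """Split (session_id, relevant) pairs into k strata, each preserving
    the overall relevant/nonrelevant proportion, as in the methodology."""
    rng = random.Random(seed)
    pos = [s for s in sessions if s[1]]
    neg = [s for s in sessions if not s[1]]
    rng.shuffle(pos)
    rng.shuffle(neg)
    folds = [[] for _ in range(k)]
    for i, s in enumerate(pos):  # deal relevant sessions round-robin
        folds[i % k].append(s)
    for i, s in enumerate(neg):  # then nonrelevant sessions
        folds[i % k].append(s)
    return folds

# 1,000 synthetic sessions, 178 of them "relevant" (~17.8%).
sessions = [(i, i < 178) for i in range(1000)]
folds = stratified_folds(sessions, k=10)
```

Each stratum then ends up with roughly the same relevance rate as the full log, so fitting the regression on nine strata and predicting the tenth is a fair held-out test.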
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.10, S.829-830
  4. Wu, Q.; Iyengar, S.S.; Zhu, M.: Web based image retrieval using self-organizing feature map (2001) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 6930) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=6930,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 6930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6930)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.10, S.868-875