Search (8 results, page 1 of 1)

  • author_ss:"Rorissa, A."
  1. Rorissa, A.: A comparative study of Flickr tags and index terms in a general image collection (2010) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 4100) [ClassicSimilarity], result of:
              0.010148063 = score(doc=4100,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 4100, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4100)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
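
     The score breakdown above (and those for the other results) is Lucene's ClassicSimilarity explain output: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), and the final score is the product of queryWeight, fieldWeight, and the coord factors. A minimal Python sketch, using only the quantities shown for result 1, reproduces the reported numbers:

       import math

       # Quantities taken from the explain output for result 1 (doc 4100)
       freq, doc_freq, max_docs = 18.0, 37942, 44218
       query_norm, field_norm = 0.046056706, 0.0390625

       tf = math.sqrt(freq)                             # 4.2426405
       idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.153047
       query_weight = idf * query_norm                  # 0.053105544 = queryWeight
       field_weight = tf * idf * field_norm             # 0.19109234  = fieldWeight
       weight = query_weight * field_weight             # 0.010148063 = weight(_text_:a)
       score = weight * 0.5 * 0.5                       # two coord(1/2) factors
       print(score)                                     # ~0.0025370158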
    
    Abstract
     Web 2.0 and social/collaborative tagging have altered the traditional roles of indexer and user. Traditional indexing tools and systems assume the top-down approach to indexing in which a trained professional is responsible for assigning index terms to information sources with a potential user in mind. However, in today's Web, end users create, organize, index, and search for images and other information sources through social tagging and other collaborative activities. One of the impediments to user-centered indexing has been the cost of soliciting user-generated index terms or tags. Social tagging of images such as those on Flickr, an online photo management and sharing application, presents an opportunity that can be seized by designers of indexing tools and systems to bridge the semantic gap between indexer terms and user vocabularies. Empirical research on the differences and similarities between user-generated tags and index terms based on controlled vocabularies has the potential to inform future design of image indexing tools and systems. Toward this end, a random sample of Flickr images and the tags assigned to them were content analyzed and compared with another sample of index terms from a general image collection using established frameworks for image attributes and contents. The results show that there is a fundamental difference between the types of tags and types of index terms used. In light of this, implications for research into and design of user-centered image indexing tools and systems are discussed.
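
     The comparison described above comes down to coding tags and index terms into attribute/content categories and testing whether the two distributions differ. A minimal sketch of that kind of comparison, with entirely hypothetical counts and a chi-square test as one possible choice of statistic:

       from scipy.stats import chi2_contingency

       # Hypothetical counts of coded types (e.g. object, scene, abstract concept)
       flickr_tags = [620, 240, 140]
       index_terms = [300, 410, 90]

       chi2, p, dof, expected = chi2_contingency([flickr_tags, index_terms])
       print(f"chi2={chi2:.1f}, p={p:.4f}")  # a small p suggests the type distributions differ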
    Type
    a
  2. Iyer, H.; Rorissa, A.: Representative images for browsing large image collections : a cognitive perspective 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 3557) [ClassicSimilarity], result of:
              0.00994303 = score(doc=3557,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 3557, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3557)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In large collections of images, one way to facilitate browsing is to provide thumbnails of representative images. This paper examines how representative images are chosen within such categories. Towards this end, a free-sorting study of 50 images was conducted with 75 participants, who sorted the images into categories, selected a representative image for each category, and indicated the prominent feature in the selected image. The results indicate reasonable agreement in both the choice of representative images and the selection of prominent features appearing in them. The prominent feature appears to be one of the factors that have a bearing on the way people categorize.
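
     A minimal sketch of how agreement on a representative image can be summarized for one emergent category; the participant choices below are hypothetical:

       from collections import Counter

       # Hypothetical choices of a representative image for one category
       choices = ["img_07", "img_07", "img_12", "img_07", "img_03", "img_07"]
       image, votes = Counter(choices).most_common(1)[0]
       print(f"modal representative: {image}, agreement = {votes / len(choices):.0%}")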
    Type
    a
  3. Rorissa, A.: Relationships between perceived features and similarity of images : a test of Tversky's contrast model (2007) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 520) [ClassicSimilarity], result of:
              0.00894975 = score(doc=520,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 520, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=520)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rapid growth of the numbers of images and their users as a result of the reduction in cost and increase in efficiency of the creation, storage, manipulation, and transmission of images poses challenges to those who organize and provide access to images. One of these challenges is similarity matching, a key component of current content-based image retrieval systems. Similarity matching often is implemented through similarity measures based on geometric models of similarity whose metric axioms are not satisfied by human similarity judgment data. This study is significant in that it is among the first known to test Tversky's contrast model, which equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, in the context of image representation and retrieval. Data were collected from 150 participants who performed an image description and a similarity judgment task. Structural equation modeling, correlation, and regression analyses confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky. The results hold implications for future research that will attempt to further test the contrast model and assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
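
     Tversky's contrast model scores the similarity of two stimuli as a weighted combination of their common and distinctive features, S(A, B) = theta*f(A ∩ B) - alpha*f(A − B) - beta*f(B − A). A minimal sketch with set-valued features and illustrative weights (the study itself estimates these relationships with structural equation modeling, correlation, and regression):

       def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
           """Contrast model over set-valued features; weights are illustrative."""
           common = len(a & b)          # f(A ∩ B)
           only_a = len(a - b)          # f(A − B)
           only_b = len(b - a)          # f(B − A)
           return theta * common - alpha * only_a - beta * only_b

       img1 = {"sky", "water", "boat", "people"}
       img2 = {"sky", "water", "beach"}
       print(tversky_similarity(img1, img2))  # 2.0 - 1.0 - 0.5 = 0.5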
    Type
    a
  4. Rorissa, A.: User-generated descriptions of individual images versus labels of groups of images : a comparison using basic level theory (2008) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2122) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2122,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2122, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2122)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Although images are visual information sources with little or no text associated with them, users still tend to use text to describe images and formulate queries. This is because digital libraries and search engines provide mostly text query options and rely on text annotations for representation and retrieval of the semantic content of images. While the main focus of image research is on indexing and retrieval of individual images, the general topic of image browsing, and the indexing and retrieval of groups of images, has not been adequately investigated. Comparisons, grounded in cognitive models, of user-supplied descriptions of individual images and user-supplied labels for groups of images are scarce. This work fills that gap. Using basic level theory as a framework, a comparison of the descriptions of individual images and the labels assigned to groups of images by 180 participants in three studies found a marked difference in their level of abstraction. The results confirm assertions by previous researchers in LIS and other fields that groups of images are labeled using more superordinate level terms, while individual image descriptions are mainly at the basic level. Implications for the design of image browsing interfaces, taxonomies, thesauri, and similar tools are discussed.
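
     A minimal sketch of the kind of comparison implied here, assuming terms have already been hand-coded by abstraction level (superordinate / basic / subordinate); the terms and coding below are hypothetical:

       # Hypothetical coding of terms by abstraction level
       level = {"animals": "superordinate", "nature": "superordinate",
                "dog": "basic", "beach": "basic", "car": "basic",
                "golden retriever": "subordinate"}

       individual_descriptions = ["dog", "beach", "car", "golden retriever", "dog"]
       group_labels = ["animals", "nature", "dog"]

       def share_superordinate(terms):
           return sum(level[t] == "superordinate" for t in terms) / len(terms)

       print(share_superordinate(individual_descriptions))  # 0.0  -> mostly basic level
       print(share_superordinate(group_labels))             # ~0.67 -> more superordinate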
    Type
    a
  5. Assefa, S.G.; Rorissa, A.: A bibliometric mapping of the structure of STEM education using co-word analysis (2013) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 1134) [ClassicSimilarity], result of:
              0.008285859 = score(doc=1134,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 1134, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1134)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     STEM, a set of fields that includes science, technology, engineering, and mathematics; allied disciplines ranging from environmental, agricultural, and earth sciences to life science and computer science; and education and training in these fields, is clearly at the top of the list of priority funding areas for governments, including the United States government. The U.S. has 11 federal agencies dedicated to supporting programs and providing funding for research and curriculum development. The domain of STEM education has significant implications for preparing the desired workforce with the requisite knowledge, developing appropriate curricula, providing teachers the necessary professional development, focusing research dollars on areas that have maximum impact, and developing national educational policy and standards. A complex undertaking such as STEM education, which attracts interest and valuable resources from a number of stakeholders, needs to be well understood. In light of this, we attempt to describe the underlying structure of STEM education, its core areas, and their relationships through co-word analyses of the titles, keywords, and abstracts of the relevant literature using visualization and bibliometric mapping tools. Implications are drawn with respect to the nature of STEM education as well as curriculum and policy development.
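
     Co-word analysis of this kind starts from co-occurrence counts of keywords within the same record; the weighted pairs are what a bibliometric mapping tool then lays out visually. A minimal sketch with hypothetical keyword sets:

       from collections import Counter
       from itertools import combinations

       # Hypothetical keyword sets extracted from titles/keywords/abstracts
       papers = [
           {"stem education", "curriculum", "teacher training"},
           {"stem education", "policy", "workforce"},
           {"stem education", "curriculum", "policy"},
       ]

       cooccurrence = Counter()
       for keywords in papers:
           cooccurrence.update(combinations(sorted(keywords), 2))

       for pair, count in cooccurrence.most_common(3):
           print(pair, count)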
    Type
    a
  6. Rorissa, A.; Clough, P.; Deselaers, T.: Exploring the relationship between feature and perceptual visual spaces (2008) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 1612) [ClassicSimilarity], result of:
              0.008118451 = score(doc=1612,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 1612, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1612)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The number and size of digital repositories containing visual information (images or videos) are increasing, thereby demanding appropriate ways to represent and search these information spaces. Their visualization often relies on reducing the dimensions of the information space to create a lower-dimensional feature space which, from the point of view of the end user, will be viewed and interpreted as a perceptual space. Critically for information visualization, the degree to which the feature and perceptual spaces correspond is still an open research question. In this paper we report the results of three studies which indicate that distance (or dissimilarity) matrices based on low-level visual features, in conjunction with various similarity measures commonly used in current CBIR systems, correlate with human similarity judgments.
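
     A minimal sketch of the comparison reported here, with random stand-in data: a distance matrix computed from low-level feature vectors is correlated (Spearman rank correlation, one common choice) with human dissimilarity judgments over the same image pairs:

       import numpy as np
       from scipy.spatial.distance import pdist
       from scipy.stats import spearmanr

       rng = np.random.default_rng(0)
       features = rng.random((10, 64))   # 10 images x 64 low-level features (stand-in data)
       human_dissim = rng.random(45)     # judgments for the 10*9/2 = 45 image pairs (stand-in)

       feature_dist = pdist(features, metric="euclidean")  # condensed distance matrix
       rho, p = spearmanr(feature_dist, human_dissim)
       print(f"feature vs. perceptual space correlation: rho={rho:.3f}, p={p:.3f}")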
    Type
    a
  7. Rorissa, A.; Yuan, X.: Visualizing and mapping the intellectual structure of information retrieval (2012) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 2744) [ClassicSimilarity], result of:
              0.008118451 = score(doc=2744,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 2744, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Information retrieval is a long-established subfield of library and information science. Since its inception in the early- to mid-1950s, it has grown as a result, in part, of well-regarded retrieval system evaluation exercises/campaigns, the proliferation of Web search engines, and the expansion of digital libraries. Although researchers have examined the intellectual structure and nature of the general field of library and information science, the same cannot be said about the subfield of information retrieval. We address that in this work by sketching the information retrieval intellectual landscape through visualizations of citation behaviors. Citation data for 10 years (2000-2009) were retrieved from the Web of Science and analyzed using existing visualization techniques. Our results address information retrieval's co-authorship network, highly productive authors, highly cited journals and papers, author-assigned keywords, active institutions, and the import of ideas from other disciplines.
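
     A minimal sketch of the co-authorship side of such a mapping; the author teams below are taken from results on this page, and parsing them from Web of Science records is assumed:

       from itertools import combinations
       import networkx as nx

       # Author teams for three of the papers listed on this page
       records = [
           ["Rorissa, A.", "Yuan, X."],
           ["Iyer, H.", "Rorissa, A."],
           ["Rorissa, A.", "Clough, P.", "Deselaers, T."],
       ]

       G = nx.Graph()
       for authors in records:
           for a, b in combinations(authors, 2):
               weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
               G.add_edge(a, b, weight=weight)

       print(G.number_of_nodes(), "authors;", G.number_of_edges(), "co-authorship links")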
    Type
    a
  8. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.00
    0.0014351527 = product of:
      0.0028703054 = sum of:
        0.0028703054 = product of:
          0.005740611 = sum of:
            0.005740611 = weight(_text_:a in 1958) [ClassicSimilarity], result of:
              0.005740611 = score(doc=1958,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10809815 = fieldWeight in 1958, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1958)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a