Search (9 results, page 1 of 1)

  • theme_ss:"Bilder"
  1. Park, J.-r.: Semantic interoperability and metadata quality : an analysis of metadata item records of digital image collections (2006) 0.01
    0.009880973 = product of:
      0.039523892 = sum of:
        0.039523892 = product of:
          0.079047784 = sum of:
            0.079047784 = weight(_text_:assessment in 172) [ClassicSimilarity], result of:
              0.079047784 = score(doc=172,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30499613 = fieldWeight in 172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=172)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
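    The nested explanation above is Lucene ClassicSimilarity (TF-IDF) output. A minimal sketch in Python, using only the factors printed in the tree, reproduces the displayed score of 0.0099 for the term "assessment" in document 172; the variable names are mine, the numbers are taken verbatim from the tree.

    ```python
    from math import sqrt

    # Factors copied from the explanation tree for doc 172, term "assessment"
    freq = 2.0               # termFreq: the term occurs twice in the field
    idf = 5.52102            # idf(docFreq=480, maxDocs=44218)
    query_norm = 0.04694356  # queryNorm
    field_norm = 0.0390625   # fieldNorm(doc=172)

    tf = sqrt(freq)                           # ~1.4142135 = tf(freq=2.0)
    query_weight = idf * query_norm           # ~0.25917634
    field_weight = tf * idf * field_norm      # ~0.30499613
    term_score = query_weight * field_weight  # ~0.079047784

    # coord(1/2) and coord(1/4) down-weight the score because only one of the
    # query's clauses/terms matched this document.
    final_score = term_score * 0.5 * 0.25
    print(final_score)  # ~0.009880973, displayed rounded as 0.01
    ```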
    
    Abstract
    This paper is a current assessment of the status of metadata creation and mapping between cataloger-defined field names and Dublin Core (DC) metadata elements across three digital image collections. The metadata elements that most frequently evince inaccurate, inconsistent, and incomplete DC metadata application are identified. The most frequently occurring locally added metadata elements and the associated pattern development are also examined. For this, a randomly collected sample of 659 metadata item records from three digital image collections is analyzed. Implications and issues drawn from the evaluation of the current status of metadata creation and mapping are also discussed in relation to the semantic interoperability of concept representation across digital image collections. The findings of the study suggest that conceptual ambiguities and semantic overlaps inherent in some DC metadata elements hinder semantic interoperability. The DC metadata scheme needs to be refined to disambiguate the semantic relations of those DC metadata elements whose names and corresponding definitions present semantic overlaps and conceptual ambiguities. The findings of the study also suggest that the development of mediation mechanisms such as concept networks that facilitate the metadata creation and mapping process is critically needed for enhancing metadata quality.
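    To make the kind of check the study performs concrete, the sketch below audits item records against a small, hypothetical crosswalk from cataloger-defined field names to DC elements; the field names, mapping, and required-element set are illustrative assumptions, not the paper's data.

    ```python
    # Hypothetical crosswalk from cataloger-defined field names to Dublin Core elements;
    # the actual mappings analyzed in the paper differ per collection.
    FIELD_TO_DC = {
        "Photographer": "dc:creator",
        "Caption": "dc:description",
        "Digitized": "dc:date",
    }
    REQUIRED_DC = {"dc:title", "dc:creator", "dc:date"}

    def audit_record(record):
        """Return unmappable local fields and missing required DC elements."""
        mapped = {FIELD_TO_DC[f] for f in record if f in FIELD_TO_DC}
        mapped |= {f for f in record if f.startswith("dc:")}
        return {
            "unmappable": [f for f in record
                           if f not in FIELD_TO_DC and not f.startswith("dc:")],
            "missing": sorted(REQUIRED_DC - mapped),
        }

    print(audit_record({"dc:title": "Harbor view", "Photographer": "Unknown", "Notes": "glass plate"}))
    # {'unmappable': ['Notes'], 'missing': ['dc:date']}
    ```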
  2. Kim, C.-R.; Chung, C.-W.: XMage: An image retrieval method based on partial similarity (2006) 0.00
    0.0029427784 = product of:
      0.011771114 = sum of:
        0.011771114 = product of:
          0.047084454 = sum of:
            0.047084454 = weight(_text_:based in 973) [ClassicSimilarity], result of:
              0.047084454 = score(doc=973,freq=8.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.33289194 = fieldWeight in 973, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=973)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    XMage is introduced in this paper as a method for partial similarity searching in image databases. Region-based image retrieval is a method of retrieving partially similar images; it has been proposed as a way to accurately process queries in an image database. In region-based image retrieval, region matching is indispensable for computing the partial similarity between two images because query processing is based on regions instead of the entire image. A naive method of region matching is a sequential comparison between regions, which causes severe overhead and deteriorates the performance of query processing. In this paper, a new image contents representation, called the Condensed eXtended Histogram (CX-Histogram), is presented in conjunction with a well-defined distance function, CXSim(), on the CX-Histogram. CXSim() is a new image-to-image similarity measure that computes the partial similarity between two images. It achieves the effect of comparing regions of two images by simply comparing the two images. CXSim() reduces the query space by pruning irrelevant images, and it is used as a filtering function before sequential scanning. Extensive experiments were performed on real image data to evaluate XMage. It provides significant pruning of irrelevant images with no false dismissals. As a consequence, it achieves up to a 5.9-fold speed-up in search over R*-tree search followed by sequential scanning.
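    CXSim() and the CX-Histogram are defined in the paper itself; purely to illustrate the filter-before-sequential-scan pattern the abstract describes, the sketch below uses plain histogram intersection as a cheap whole-image bound to prune candidates before an expensive region-level comparison. Both the histogram measure and the region matcher here are stand-ins, not the paper's constructs.

    ```python
    import numpy as np

    def histogram_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
        """Cheap whole-image similarity: normalized histogram intersection."""
        return float(np.minimum(h1, h2).sum() / max(h1.sum(), 1e-9))

    def search(query, images, region_similarity, threshold=0.3):
        """Filter-then-verify: prune with the cheap whole-image measure, then run
        the costly region-by-region comparison only on surviving candidates."""
        candidates = [img for img in images
                      if histogram_similarity(query["hist"], img["hist"]) >= threshold]
        return sorted(candidates,
                      key=lambda img: region_similarity(query["regions"], img["regions"]),
                      reverse=True)
    ```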
  3. Yee, K.-P.; Swearingen, K.; Li, K.; Hearst, M.: Faceted metadata for image search and browsing 0.00
    0.0024970302 = product of:
      0.009988121 = sum of:
        0.009988121 = product of:
          0.039952483 = sum of:
            0.039952483 = weight(_text_:based in 5944) [ClassicSimilarity], result of:
              0.039952483 = score(doc=5944,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28246817 = fieldWeight in 5944, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5944)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    There are currently two dominant interface types for searching and browsing large image collections: keyword-based search, and searching by overall similarity to sample images. We present an alternative based on enabling users to navigate along conceptual dimensions that describe the images. The interface makes use of hierarchical faceted metadata and dynamically generated query previews. A usability study, in which 32 art history students explored a collection of 35,000 fine arts images, compares this approach to a standard image search interface. Despite the unfamiliarity and power of the interface (attributes that often lead to rejection of new search interfaces), the study results show that 90% of the participants preferred the metadata approach overall, 97% said that it helped them learn more about the collection, 75% found it more flexible, and 72% found it easier to use than a standard baseline system. These results indicate that a category-based approach is a successful way to provide access to image collections.
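    A minimal sketch of the core mechanic described above: filtering a collection by facet values and generating query previews, i.e. counts of how many images each further facet click would return. The facet names and records are invented for illustration.

    ```python
    from collections import Counter

    # Invented sample records; real collections carry richer, hierarchical facets.
    IMAGES = [
        {"id": 1, "medium": "oil", "period": "baroque", "location": "Italy"},
        {"id": 2, "medium": "oil", "period": "renaissance", "location": "Italy"},
        {"id": 3, "medium": "print", "period": "baroque", "location": "Germany"},
    ]

    def apply_filters(images, filters):
        """Keep images matching every currently selected facet value."""
        return [img for img in images if all(img.get(k) == v for k, v in filters.items())]

    def query_previews(images, filters, facet):
        """For each value of `facet`, count how many images a further click would return."""
        return Counter(img[facet] for img in apply_filters(images, filters))

    print(query_previews(IMAGES, {"medium": "oil"}, "period"))
    # Counter({'baroque': 1, 'renaissance': 1})
    ```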
  4. Scalla, M.: Auf der Phantom-Spur : Georges Didi-Hubermans neues Standardwerk über Aby Warburg (2006) 0.00
    0.0023850764 = product of:
      0.0095403055 = sum of:
        0.0095403055 = product of:
          0.019080611 = sum of:
            0.019080611 = weight(_text_:22 in 4054) [ClassicSimilarity], result of:
              0.019080611 = score(doc=4054,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.116070345 = fieldWeight in 4054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4054)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 1.2011 11:22:12
  5. Rorissa, A.: Relationships between perceived features and similarity of images : a test of Tversky's contrast model (2007) 0.00
    0.0020808585 = product of:
      0.008323434 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 520) [ClassicSimilarity], result of:
              0.033293735 = score(doc=520,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 520, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=520)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    The rapid growth in the number of images and image users, driven by falling costs and greater efficiency in the creation, storage, manipulation, and transmission of images, poses challenges to those who organize and provide access to images. One of these challenges is similarity matching, a key component of current content-based image retrieval systems. Similarity matching is often implemented through similarity measures based on geometric models of similarity whose metric axioms are not satisfied by human similarity judgment data. This study is significant in that it is among the first known to test Tversky's contrast model, which equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, in the context of image representation and retrieval. Data were collected from 150 participants who performed an image description and a similarity judgment task. Structural equation modeling, correlation, and regression analyses confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky. The results hold implications for future research that will attempt to further test the contrast model and assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
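    For reference, Tversky's contrast model expresses the similarity of stimuli a and b with feature sets A and B as a weighted combination of common and distinctive features, s(a,b) = theta*f(A ∩ B) - alpha*f(A - B) - beta*f(B - A). A minimal sketch, with set cardinality standing in for the salience function f and arbitrary weights:

    ```python
    def contrast_similarity(A, B, theta=1.0, alpha=0.5, beta=0.5):
        """Tversky's contrast model with set size |.| as the salience function f."""
        return theta * len(A & B) - alpha * len(A - B) - beta * len(B - A)

    a = {"water", "boat", "sunset", "people"}  # features named by viewers of image a
    b = {"water", "boat", "harbor"}            # features named by viewers of image b
    print(contrast_similarity(a, b))           # 2*1.0 - 2*0.5 - 1*0.5 = 0.5
    ```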
  6. Drolshagen, J.A.: Pictorial representation of quilts from the underground railroad (2005) 0.00
    0.002059945 = product of:
      0.00823978 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 6086) [ClassicSimilarity], result of:
              0.03295912 = score(doc=6086,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 6086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6086)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    The Underground Railroad was a network of people who helped fugitive slaves escape to the North and Canada during the U.S. Civil War period, beginning in about 1831. Quilting was used as a form of information representation (Breneman 2001). This simple classification was designed to relate the symbolic transmission of escape routes and locations of sanctuary. Because it was for use in a children's library, symbolic representations were used to anchor the classes. Symbols are based in the African graphic arts, the Adinkra symbols of Ghana (West African wisdom. 2001), and also from actual quilt practice (Threads of Freedom 2001 and Breneman 2001).
  7. Lepsky, K.; Müller, T.; Wille, J.: Metadata improvement for image information retrieval (2010) 0.00
    0.002059945 = product of:
      0.00823978 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 4995) [ClassicSimilarity], result of:
              0.03295912 = score(doc=4995,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 4995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4995)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses the goals and results of the research project Perseus-a as an attempt to improve information retrieval of digital images by automatically connecting them with text-based descriptions. The development uses the image collection of prometheus, the distributed digital image archive for research and studies, the articles of the digitized Reallexikon zur Deutschen Kunstgeschichte, art historical terminological resources and classification data, and an open source system for linguistic and statistical automatic indexing called lingo.
  8. Lee, C.-Y.; Soo, V.-W.: The conflict detection and resolution in knowledge merging for image annotation (2006) 0.00
    0.0017656671 = product of:
      0.0070626684 = sum of:
        0.0070626684 = product of:
          0.028250674 = sum of:
            0.028250674 = weight(_text_:based in 981) [ClassicSimilarity], result of:
              0.028250674 = score(doc=981,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19973516 = fieldWeight in 981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=981)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Semantic annotation of images is an important step toward semantic information extraction and retrieval. However, in a multi-annotator environment, various types of conflicts, such as converting, merging, and inference conflicts, can arise during annotation. We devised conflict detection patterns based on data and ontology at different inference levels and proposed corresponding automatic conflict resolution strategies. We also constructed a simple annotator model to decide whether to trust a given piece of annotation from a given annotator. Finally, we conducted experiments to compare the performance of the automatic conflict resolution approaches during the annotation of images in the celebrity domain by 62 annotators. The experiments showed that the proposed method improved 3/4 annotation accuracy with respect to a naïve annotation system.
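    As an illustration of trust-weighted conflict resolution (not the authors' actual annotator model), the sketch below resolves conflicting labels for the same image by weighting each annotator's vote with an estimated reliability; the labels and reliability values are invented.

    ```python
    from collections import defaultdict

    def resolve_conflict(annotations, reliability):
        """Pick the label whose supporting annotators carry the most total reliability.

        annotations: (annotator_id, label) pairs for one contested image.
        reliability: estimated trust per annotator, e.g. past agreement rate.
        """
        votes = defaultdict(float)
        for annotator, label in annotations:
            votes[label] += reliability.get(annotator, 0.5)  # unknown annotators get a neutral weight
        return max(votes, key=votes.get)

    print(resolve_conflict(
        [("a1", "Person:AudreyHepburn"), ("a2", "Person:AudreyHepburn"), ("a3", "Person:GraceKelly")],
        {"a1": 0.6, "a2": 0.7, "a3": 0.9},
    ))  # Person:AudreyHepburn (0.6 + 0.7 = 1.3 outweighs 0.9)
    ```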
  9. Rorissa, A.: A comparative study of Flickr tags and index terms in a general image collection (2010) 0.00
    0.0014713892 = product of:
      0.005885557 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 4100) [ClassicSimilarity], result of:
              0.023542227 = score(doc=4100,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 4100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4100)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Web 2.0 and social/collaborative tagging have altered the traditional roles of indexer and user. Traditional indexing tools and systems assume the top-down approach to indexing in which a trained professional is responsible for assigning index terms to information sources with a potential user in mind. However, in today's Web, end users create, organize, index, and search for images and other information sources through social tagging and other collaborative activities. One of the impediments to user-centered indexing had been the cost of soliciting user-generated index terms or tags. Social tagging of images such as those on Flickr, an online photo management and sharing application, presents an opportunity that can be seized by designers of indexing tools and systems to bridge the semantic gap between indexer terms and user vocabularies. Empirical research on the differences and similarities between user-generated tags and index terms based on controlled vocabularies has the potential to inform future design of image indexing tools and systems. Toward this end, a random sample of Flickr images and the tags assigned to them were content analyzed and compared with another sample of index terms from a general image collection using established frameworks for image attributes and contents. The results show that there is a fundamental difference between the types of tags and types of index terms used. In light of this, implications for research into and design of user-centered image indexing tools and systems are discussed.
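    A rough sketch of the comparison reduced to one statistic: the share of user-assigned tags that exactly match terms from a controlled vocabulary after simple normalization. Both samples below are invented placeholders, not the study's data.

    ```python
    def vocabulary_match_rate(tags, controlled_vocabulary):
        """Fraction of user tags that also appear as controlled index terms."""
        if not tags:
            return 0.0
        normalized = {t.strip().lower() for t in tags}
        vocab = {t.strip().lower() for t in controlled_vocabulary}
        return len(normalized & vocab) / len(normalized)

    flickr_tags = {"sunset", "beach", "vacation2009", "Golden Gate Bridge"}
    index_terms = {"Sunset", "Beaches", "Bridges", "Golden Gate Bridge"}
    print(vocabulary_match_rate(flickr_tags, index_terms))  # 0.5
    ```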