Search (7 results, page 1 of 1)

  • Filter: theme_ss:"Bilder"
  1. Kim, C.-R.; Chung, C.-W.: XMage: An image retrieval method based on partial similarity (2006) 0.03
    0.031288948 = product of:
      0.062577896 = sum of:
        0.02586502 = weight(_text_:data in 973) [ClassicSimilarity], result of:
          0.02586502 = score(doc=973,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=973)
        0.036712877 = product of:
          0.073425755 = sum of:
            0.073425755 = weight(_text_:processing in 973) [ClassicSimilarity], result of:
              0.073425755 = score(doc=973,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.38733965 = fieldWeight in 973, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=973)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
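The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown for result 1. A minimal sketch reproducing the final score from the raw statistics, where MAX_DOCS and QUERY_NORM are read directly off the output above:

```python
import math

MAX_DOCS = 44218          # maxDocs from the explain output
QUERY_NORM = 0.046827413  # queryNorm from the explain output

def idf(doc_freq):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq, doc_freq, field_norm):
    tf = math.sqrt(freq)                        # tf = sqrt(termFreq)
    query_weight = idf(doc_freq) * QUERY_NORM   # queryWeight
    field_weight = tf * idf(doc_freq) * field_norm
    return query_weight * field_weight

# _text_:data       -> freq=2, docFreq=5088, fieldNorm=0.0390625
w_data = term_score(2.0, 5088, 0.0390625)
# _text_:processing -> freq=6, docFreq=2097, fieldNorm=0.0390625,
# scaled by coord(1/2) inside its sub-query
w_proc = term_score(6.0, 2097, 0.0390625) * 0.5
# final document score: (sum of clause scores) * coord(2/4)
score = (w_data + w_proc) * 0.5
print(score)  # ~= 0.031288948
```

The idf values 3.1620505 and 4.048147 and every intermediate weight in the tree fall out of these three formulas.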
    
    Abstract
    XMage is introduced in this paper as a method for partial-similarity search in image databases. Region-based image retrieval, which retrieves partially similar images, has been proposed as a way to process queries in an image database accurately. In region-based image retrieval, region matching is indispensable for computing the partial similarity between two images, because query processing is based on regions instead of the entire image. A naive approach to region matching is a sequential comparison between regions, which incurs severe overhead and degrades query-processing performance. This paper presents a new image-contents representation, the Condensed eXtended Histogram (CX-Histogram), together with a well-defined distance function, CXSim(), on the CX-Histogram. CXSim() is a new image-to-image similarity measure that computes the partial similarity between two images; it achieves the effect of comparing the regions of two images by simply comparing the images themselves. CXSim() reduces the query space by pruning irrelevant images and is used as a filtering function before sequential scanning. Extensive experiments on real image data were performed to evaluate XMage. It prunes irrelevant images significantly with no false dismissals and, as a consequence, achieves up to a 5.9-fold search speed-up over R*-tree search followed by sequential scanning.
    Source
    Information processing and management. 42(2006) no.2, S.484-502
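The filtering step this abstract describes follows a generic filter-then-verify pattern: a cheap approximate similarity prunes the database before the expensive exact comparison. A minimal sketch, assuming the approximate measure never underestimates the exact one (which is what guarantees "no false dismissals"); all names here are hypothetical, not from the paper:

```python
def filtered_search(query, images, approx_sim, exact_sim, threshold):
    """Filter-then-verify search: prune with a cheap approximate
    similarity, then run the expensive exact measure only on the
    survivors. No false dismissals requires that approx_sim never
    underestimates exact_sim."""
    candidates = [img for img in images if approx_sim(query, img) >= threshold]
    return [img for img in candidates if exact_sim(query, img) >= threshold]

# Toy demo: per-image similarities given as precomputed numbers.
images = [("a", 0.9, 0.8), ("b", 0.4, 0.3), ("c", 0.7, 0.5)]
approx = lambda q, img: img[1]  # cheap, never below the exact value
exact = lambda q, img: img[2]   # expensive measure
hits = [name for name, *_ in filtered_search(None, images, approx, exact, 0.6)]
print(hits)  # ['a']
```

Image "b" is pruned without ever being scanned exactly; "c" survives the filter but fails verification.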
  2. Lee, C.-Y.; Soo, V.-W.: The conflict detection and resolution in knowledge merging for image annotation (2006) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 981) [ClassicSimilarity], result of:
          0.031038022 = score(doc=981,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=981)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 981) [ClassicSimilarity], result of:
              0.05087085 = score(doc=981,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=981)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Semantic annotation of images is an important step in supporting semantic information extraction and retrieval. In a multi-annotator environment, however, various types of conflicts, such as converting, merging, and inference conflicts, can arise during annotation. We devised conflict detection patterns based on data and ontology at different inference levels, and proposed corresponding automatic conflict resolution strategies. We also constructed a simple annotator model to decide whether to trust a given piece of annotation from a given annotator. Finally, we conducted experiments comparing the performance of the automatic conflict resolution approaches during the annotation of images in the celebrity domain by 62 annotators. The experiments showed that the proposed method improved annotation accuracy by 3/4 with respect to a naïve annotation system.
    Source
    Information processing and management. 42(2006) no.4, S.1030-1055
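The trust-based resolution the abstract describes can be illustrated with a deliberately simplified sketch: for each conflicting field, keep the value backed by the highest summed annotator trust. The voting rule and all names are hypothetical; the paper's actual strategies are pattern-specific:

```python
from collections import defaultdict

def resolve_conflicts(annotations, trust):
    """Resolve conflicting (field, value, annotator) annotations by
    trust-weighted voting: per field, keep the value whose supporting
    annotators have the highest summed trust."""
    resolved = {}
    for field in {f for f, _, _ in annotations}:
        votes = defaultdict(float)
        for f, value, annotator in annotations:
            if f == field:
                votes[value] += trust.get(annotator, 0.0)
        resolved[field] = max(votes, key=votes.get)
    return resolved

annots = [("birthplace", "Paris", "u1"),
          ("birthplace", "Lyon", "u2"),
          ("birthplace", "Paris", "u3")]
trust = {"u1": 0.9, "u2": 0.8, "u3": 0.4}
print(resolve_conflicts(annots, trust))  # {'birthplace': 'Paris'}
```

"Paris" wins with summed trust 1.3 against 0.8, even though a plain majority vote would reach the same answer here; the trust weights matter when a single highly trusted annotator disagrees with several unreliable ones.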
  3. Rorissa, A.: Relationships between perceived features and similarity of images : a test of Tversky's contrast model (2007) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 520) [ClassicSimilarity], result of:
          0.03657866 = score(doc=520,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 520, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
      0.25 = coord(1/4)
    
    Abstract
    The rapid growth in the number of images and of image users, driven by cheaper and more efficient creation, storage, manipulation, and transmission of images, poses challenges to those who organize and provide access to them. One of these challenges is similarity matching, a key component of current content-based image retrieval systems. Similarity matching is often implemented through similarity measures based on geometric models of similarity, whose metric axioms are not satisfied by human similarity-judgment data. This study is significant in that it is among the first known to test Tversky's contrast model, which equates the degree of similarity of two stimuli with a linear combination of their common and distinctive features, in the context of image representation and retrieval. Data were collected from 150 participants who performed an image description task and a similarity judgment task. Structural equation modeling, correlation, and regression analyses confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky. The results hold implications for future research that will attempt to further test the contrast model, and they assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
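Tversky's contrast model, which this study tests, scores similarity as a linear combination of common and distinctive features. A minimal sketch taking the salience function f as plain set cardinality; the parameter defaults are illustrative, not values from the study:

```python
def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's contrast model:
    sim(A, B) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A),
    with f taken here as simple set cardinality."""
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

# Two images sharing two perceived features, each with one distinctive one:
print(tversky_similarity({"sky", "tree", "water"}, {"sky", "tree", "car"}))  # 1.0
```

Unlike a geometric distance, this measure need not be symmetric (alpha != beta allows sim(A, B) != sim(B, A)), which is exactly the property that lets it fit human similarity judgments where metric axioms fail.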
  4. Lepsky, K.; Müller, T.; Wille, J.: Metadata improvement for image information retrieval (2010) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 4995) [ClassicSimilarity], result of:
          0.036211025 = score(doc=4995,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 4995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4995)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses the goals and results of the research project Perseus-a, an attempt to improve information retrieval of digital images by automatically connecting them with text-based descriptions. The development draws on the image collection of prometheus, the distributed digital image archive for research and studies; the articles of the digitized Reallexikon zur Deutschen Kunstgeschichte; art-historical terminological resources and classification data; and lingo, an open-source system for linguistic and statistical automatic indexing.
  5. Fukumoto, T.: An analysis of image retrieval behavior for metadata type image database (2006) 0.01
    0.007418666 = product of:
      0.029674664 = sum of:
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 965) [ClassicSimilarity], result of:
              0.05934933 = score(doc=965,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=965)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 42(2006) no.3, S.723-728
  6. Menard, E.: Study on the influence of vocabularies used for image indexing in a multilingual retrieval environment : reflections on scribbles (2007) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 1089) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1089,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1089)
      0.25 = coord(1/4)
    
    Abstract
    For many years the Web has been an important medium for the diffusion of multilingual resources. Linguistic differences still form a major obstacle to scientific, cultural, and educational exchange. Besides this linguistic diversity, a multitude of databases and collections now contain documents in various formats, which may also adversely affect the retrieval process. This paper describes a research project aiming to verify the existing relations between two indexing approaches, traditional image indexing with controlled vocabularies and free image indexing with uncontrolled vocabulary, and their respective performance for image retrieval in a multilingual context. The research also compares image retrieval within two contexts: a monolingual context, where the language of the query is the same as the indexing language, and a multilingual context, where the language of the query differs from the indexing language. It will indicate whether one of these indexing approaches surpasses the other in terms of effectiveness, efficiency, and satisfaction of the image searchers. The paper presents the context and problem statement of the research project and describes the experiment carried out as well as the data collection methods.
  7. Scalla, M.: Auf der Phantom-Spur : Georges Didi-Hubermans neues Standardwerk über Aby Warburg [On the trail of a phantom: Georges Didi-Huberman's new standard work on Aby Warburg] (2006) 0.00
    0.0023791753 = product of:
      0.009516701 = sum of:
        0.009516701 = product of:
          0.019033402 = sum of:
            0.019033402 = weight(_text_:22 in 4054) [ClassicSimilarity], result of:
              0.019033402 = score(doc=4054,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.116070345 = fieldWeight in 4054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4054)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 1.2011 11:22:12