Search (11 results, page 1 of 1)

  Filter: theme_ss:"Bilder"
  1. Farbensuche im Internet (2006) 0.01
    0.0071163666 = product of:
      0.049814563 = sum of:
        0.049814563 = product of:
          0.09962913 = sum of:
            0.09962913 = weight(_text_:zugriff in 1241) [ClassicSimilarity], result of:
              0.09962913 = score(doc=1241,freq=2.0), product of:
                0.2160124 = queryWeight, product of:
                  5.963546 = idf(docFreq=308, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46121946 = fieldWeight in 1241, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.963546 = idf(docFreq=308, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1241)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
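    The indented tree above is Lucene's ClassicSimilarity "explain" output. As a minimal sketch in plain Python (no Lucene required; the constants are copied from the tree itself), the leaf weight is queryWeight × fieldWeight, and the coord factors then scale the result:

    ```python
    import math

    # Reconstructing the leaf "weight(_text_:zugriff in 1241)" from the tree.
    idf = 5.963546            # 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 309)
    query_norm = 0.03622214   # queryNorm
    field_norm = 0.0546875    # fieldNorm(doc=1241)
    tf = math.sqrt(2.0)       # ClassicSimilarity tf = sqrt(freq) = 1.4142135

    query_weight = idf * query_norm       # 0.2160124
    field_weight = tf * idf * field_norm  # 0.46121946
    leaf = query_weight * field_weight    # 0.09962913

    # coord(1/2) and coord(1/7): fractions of query clauses that matched.
    score = leaf * (1 / 2) * (1 / 7)
    print(round(score, 10))   # ~0.0071163666, the displayed document score
    ```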
    
    Abstract
    The image search engine "Yotophoto" can search not only by keyword but also by color. Enter yellow as the color and "flower" as the search term, for example, and the results show sunflowers and buttercups. The search is kept simple: you cannot just type "red" or "green" as a color, but the desired shade can be picked from a palette with an eyedropper. A nice touch: users can restrict the search to four different license types, for instance to find only images that are free to use on their own homepage. By its own account, Yotophoto has indexed more than 250,000 images from sites such as Flickr and Wikipedia. Handy: a Firefox plug-in provides quick access to the service.
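    Yotophoto's internals are not documented here, so the following is only an illustrative sketch of how such a keyword-plus-color query could be answered: keyword matches are filtered by the distance between the picked color and a precomputed dominant color per image. The Image structure, the dominant-color field, and the threshold are all assumptions, not Yotophoto's actual design.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Image:
        tags: set[str]                        # keyword index, e.g. {"flower", "sunflower"}
        dominant_color: tuple[int, int, int]  # average RGB, precomputed at index time

    def color_distance(a, b):
        # Euclidean distance in RGB; real systems often work in Lab for perceptual accuracy.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def search(images, keyword, picked_color, threshold=80.0):
        """Keyword match first, then keep images whose dominant color is near the picked one."""
        return [img for img in images
                if keyword in img.tags
                and color_distance(img.dominant_color, picked_color) <= threshold]

    # e.g. search(catalog, "flower", (255, 220, 0)) with a yellow pick should surface
    # sunflowers and buttercups, mirroring the example in the abstract.
    ```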
  2. Lepsky, K.; Müller, T.; Wille, J.: Metadata improvement for image information retrieval (2010) 0.00
    0.0036594293 = product of:
      0.025616003 = sum of:
        0.025616003 = product of:
          0.064040005 = sum of:
            0.036250874 = weight(_text_:retrieval in 4995) [ClassicSimilarity], result of:
              0.036250874 = score(doc=4995,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33085006 = fieldWeight in 4995, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4995)
            0.027789133 = weight(_text_:system in 4995) [ClassicSimilarity], result of:
              0.027789133 = score(doc=4995,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2435858 = fieldWeight in 4995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4995)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper discusses the goals and results of the research project Perseus-a, an attempt to improve information retrieval of digital images by automatically connecting them with text-based descriptions. The development uses the image collection of prometheus, the distributed digital image archive for research and studies; the articles of the digitized Reallexikon zur Deutschen Kunstgeschichte; art-historical terminological resources and classification data; and lingo, an open source system for linguistic and statistical automatic indexing.
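    The paper gives no implementation details; purely as a hedged sketch of the general idea of term-based linking (matching indexing terms shared by image captions and text passages), with a made-up terminology list standing in for the art-historical resources and a crude tokenizer standing in for lingo:

    ```python
    import re

    # Made-up terminology list, standing in for the art-historical resources named above.
    TERMS = {"altarpiece", "fresco", "triptych", "madonna", "predella"}

    def index_terms(text):
        # Crude stand-in for linguistic indexing: lowercase tokens filtered by the term list.
        return {t for t in re.findall(r"\w+", text.lower()) if t in TERMS}

    def link_images_to_passages(images, passages):
        """Link each image (id -> caption) to the passages (id -> text) sharing a term."""
        passage_terms = {pid: index_terms(text) for pid, text in passages.items()}
        return {iid: [pid for pid, terms in passage_terms.items()
                      if index_terms(caption) & terms]
                for iid, caption in images.items()}
    ```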
  3. Lee, C.-Y.; Soo, V.-W.: ¬The conflict detection and resolution in knowledge merging for image annotation (2006) 0.00
    0.0026166062 = product of:
      0.018316243 = sum of:
        0.018316243 = product of:
          0.045790605 = sum of:
            0.02197135 = weight(_text_:retrieval in 981) [ClassicSimilarity], result of:
              0.02197135 = score(doc=981,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20052543 = fieldWeight in 981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=981)
            0.023819257 = weight(_text_:system in 981) [ClassicSimilarity], result of:
              0.023819257 = score(doc=981,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=981)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Semantic annotation of images is an important step in supporting semantic information extraction and retrieval. However, in a multi-annotator environment, various types of conflicts, such as converting, merging, and inference conflicts, can arise during annotation. We devised conflict detection patterns based on data and on ontology at different inference levels, and proposed corresponding automatic conflict resolution strategies. We also constructed a simple annotator model to decide whether to trust a given piece of annotation from a given annotator. Finally, we conducted experiments comparing the performance of the automatic conflict resolution approaches during the annotation of images in the celebrity domain by 62 annotators. The experiments showed that the proposed method improved annotation accuracy by 3/4 with respect to a naïve annotation system.
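    The paper's annotator model is not reproduced here; a minimal sketch of the underlying idea, weighted voting with per-annotator reliability scores (the reliability values are assumed inputs, e.g. estimated from past accuracy), might look like this:

    ```python
    from collections import defaultdict

    def resolve(annotations, reliability):
        """Pick the annotation value whose supporters have the highest summed reliability.

        annotations: (annotator_id, proposed_value) pairs for one image attribute.
        reliability: per-annotator trust in [0, 1]; unknown annotators get a neutral prior.
        """
        votes = defaultdict(float)
        for annotator, value in annotations:
            votes[value] += reliability.get(annotator, 0.5)
        return max(votes, key=votes.get)

    # e.g. resolve([("a1", "Paris"), ("a2", "Paris"), ("a3", "London")],
    #              {"a1": 0.9, "a2": 0.4, "a3": 0.8}) -> "Paris" (1.3 vs. 0.8)
    ```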
  4. Ménard, E.: Image retrieval : a comparative study on the influence of indexing vocabularies (2009) 0.00
    0.0014796278 = product of:
      0.010357394 = sum of:
        0.010357394 = product of:
          0.051786967 = sum of:
            0.051786967 = weight(_text_:retrieval in 3250) [ClassicSimilarity], result of:
              0.051786967 = score(doc=3250,freq=16.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.47264296 = fieldWeight in 3250, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3250)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper reports on a research project that compared two different approaches to indexing ordinary images representing common objects: traditional indexing with controlled vocabulary and free indexing with uncontrolled vocabulary. We also compared image retrieval in two contexts: a monolingual context, where the language of the query is the same as the indexing language, and a multilingual context, where the language of the query differs from the indexing language. To compare the performance of each indexing form, a simulation of the retrieval process involving 30 images was performed with 60 participants. A questionnaire was also submitted to participants to gather information about the retrieval process and its performance. The results of the retrieval simulation confirm that retrieval is more effective and more satisfying for the searcher when the images are indexed with the approach combining the controlled and uncontrolled vocabularies. The results also indicate that the controlled vocabulary approach is more efficient (number of queries needed to retrieve an image) than the uncontrolled vocabulary approach. However, no significant differences in temporal efficiency (time required to retrieve an image) were observed. Finally, the comparison of the two linguistic contexts reveals that retrieval is more effective and more efficient (number of queries needed to retrieve an image) in the monolingual context than in the multilingual context. Furthermore, image searchers are more satisfied when retrieval takes place in a monolingual rather than a multilingual context.
  5. Menard, E.: Image retrieval in multilingual environments : research issues (2006) 0.00
    0.0011837021 = product of:
      0.008285915 = sum of:
        0.008285915 = product of:
          0.04142957 = sum of:
            0.04142957 = weight(_text_:retrieval in 240) [ClassicSimilarity], result of:
              0.04142957 = score(doc=240,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.37811437 = fieldWeight in 240, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=240)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper presents an overview of the nature and the characteristics of the numerous problems encountered when a user tries to access a collection of images in a multilingual environment. The authors identify major research questions to be investigated to improve image retrieval effectiveness in a multilingual environment.
  6. Scalla, M.: Auf der Phantom-Spur : Georges Didi-Hubermans neues Standardwerk über Aby Warburg (2006) 0.00
    0.0010516286 = product of:
      0.0073614 = sum of:
        0.0073614 = product of:
          0.0147228 = sum of:
            0.0147228 = weight(_text_:22 in 4054) [ClassicSimilarity], result of:
              0.0147228 = score(doc=4054,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.116070345 = fieldWeight in 4054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4054)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    6.1.2011 11:22:12
  7. Menard, E.: Study on the influence of vocabularies used for image indexing in a multilingual retrieval environment : reflections on scribbles (2007) 0.00
    0.0010462549 = product of:
      0.007323784 = sum of:
        0.007323784 = product of:
          0.03661892 = sum of:
            0.03661892 = weight(_text_:retrieval in 1089) [ClassicSimilarity], result of:
              0.03661892 = score(doc=1089,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33420905 = fieldWeight in 1089, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1089)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    For many years now, the Web has been an important medium for the diffusion of multilingual resources. Linguistic differences still form a major obstacle to scientific, cultural, and educational exchange. Besides this linguistic diversity, a multitude of databases and collections now contain documents in various formats, which may also adversely affect the retrieval process. This paper describes a research project aiming to verify the relations between two indexing approaches (traditional image indexing, which recommends the use of controlled vocabularies, and free image indexing using uncontrolled vocabulary) and their respective performance for image retrieval in a multilingual context. This research also compares image retrieval in two contexts: a monolingual context, where the language of the query is the same as the indexing language, and a multilingual context, where the language of the query is different from the indexing language. This research will indicate whether one of these indexing approaches surpasses the other in terms of effectiveness, efficiency, and satisfaction of the image searchers. This paper presents the context and the problem statement of the research project. The experiment carried out is also described, as well as the data collection methods.
  8. Fukumoto, T.: ¬An analysis of image retrieval behavior for metadata type image database (2006) 0.00
    0.0010357393 = product of:
      0.007250175 = sum of:
        0.007250175 = product of:
          0.036250874 = sum of:
            0.036250874 = weight(_text_:retrieval in 965) [ClassicSimilarity], result of:
              0.036250874 = score(doc=965,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33085006 = fieldWeight in 965, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=965)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    The aim of this paper was to analyze users' behavior during image retrieval exercises. Results revealed that users tend to follow a set search strategy: first they input one or two keyword search terms, one after another, and view the images generated by their initial search; afterwards they navigate the web using the 'back to home' or 'previous page' buttons. These results are consistent with existing Web research. Many of the recorded actions revealed that subjects' behavior differed depending on whether the task was presented as a closed or an open task. In contrast, no differences were found in the time subjects took to perform a single action or in their use of the AND operator.
  9. Rorissa, A.: Relationships between perceived features and similarity of images : a test of Tversky's contrast model (2007) 0.00
    9.0608327E-4 = product of:
      0.0063425824 = sum of:
        0.0063425824 = product of:
          0.031712912 = sum of:
            0.031712912 = weight(_text_:retrieval in 520) [ClassicSimilarity], result of:
              0.031712912 = score(doc=520,freq=6.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.28943354 = fieldWeight in 520, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=520)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    The rapid growth in the number of images and their users, resulting from the reduced cost and increased efficiency of creating, storing, manipulating, and transmitting images, poses challenges to those who organize and provide access to images. One of these challenges is similarity matching, a key component of current content-based image retrieval systems. Similarity matching is often implemented through similarity measures based on geometric models of similarity whose metric axioms are not satisfied by human similarity judgment data. This study is significant in that it is among the first known to test Tversky's contrast model, which equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, in the context of image representation and retrieval. Data were collected from 150 participants who performed an image description task and a similarity judgment task. Structural equation modeling, correlation, and regression analyses confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky. The results hold implications for future research attempting to further test the contrast model, and they can assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
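    For reference, Tversky's contrast model expresses the similarity of stimuli a and b with feature sets A and B as a weighted linear combination of their common and distinctive features, where f is a salience measure:

    ```latex
    S(a, b) = \theta\, f(A \cap B) - \alpha\, f(A \setminus B) - \beta\, f(B \setminus A),
    \qquad \theta, \alpha, \beta \ge 0
    ```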
  10. Kim, C.-R.; Chung, C.-W.: XMage: An image retrieval method based on partial similarity (2006) 0.00
    9.0608327E-4 = product of:
      0.0063425824 = sum of:
        0.0063425824 = product of:
          0.031712912 = sum of:
            0.031712912 = weight(_text_:retrieval in 973) [ClassicSimilarity], result of:
              0.031712912 = score(doc=973,freq=6.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.28943354 = fieldWeight in 973, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=973)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    XMage is introduced in this paper as a method for partial similarity searching in image databases. Region-based image retrieval is a method of retrieving partially similar images, and it has been proposed as a way to accurately process queries in an image database. In region-based image retrieval, region matching is indispensable for computing the partial similarity between two images because the query processing is based upon regions instead of the entire image. A naive method of region matching is a sequential comparison between regions, which causes severe overhead and deteriorates the performance of query processing. In this paper, a new image content representation, called Condensed eXtended Histogram (CXHistogram), is presented in conjunction with a well-defined distance function CXSim() on the CX-Histogram. CXSim() is a new image-to-image similarity measure to compute the partial similarity between two images. It achieves the effect of comparing regions of two images by simply comparing the two images. CXSim() reduces query space by pruning irrelevant images, and it is used as a filtering function before sequential scanning. Extensive experiments were performed on real image data to evaluate XMage. It provides a significant pruning of irrelevant images with no false dismissals. As a consequence, it achieves up to 5.9-fold speed-up in search over the R*-tree search followed by sequential scanning.
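    The definitions of the CX-Histogram and CXSim() are given in the paper itself; as a generic stand-in for the same filter-then-scan idea, the sketch below uses plain histogram intersection (not the actual CXSim()) to prune candidates before any expensive region-level comparison:

    ```python
    def histogram_intersection(h1, h2):
        """Similarity of two normalized color histograms in [0, 1]; 1.0 means identical."""
        assert len(h1) == len(h2)
        return sum(min(a, b) for a, b in zip(h1, h2))

    def filter_candidates(query_hist, db, cutoff=0.3):
        # Cheap filtering step before sequential, region-by-region comparison,
        # mirroring the filter-then-scan strategy described above.
        return [img_id for img_id, hist in db.items()
                if histogram_intersection(query_hist, hist) >= cutoff]
    ```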
  11. Yee, K.-P.; Swearingen, K.; Li, K.; Hearst, M.: Faceted metadata for image search and browsing 0.00
    6.8055023E-4 = product of:
      0.0047638514 = sum of:
        0.0047638514 = product of:
          0.023819257 = sum of:
            0.023819257 = weight(_text_:system in 5944) [ClassicSimilarity], result of:
              0.023819257 = score(doc=5944,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 5944, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5944)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    There are currently two dominant interface types for searching and browsing large image collections: keyword-based search, and searching by overall similarity to sample images. We present an alternative based on enabling users to navigate along conceptual dimensions that describe the images. The interface makes use of hierarchical faceted metadata and dynamically generated query previews. A usability study, in which 32 art history students explored a collection of 35,000 fine arts images, compares this approach to a standard image search interface. Despite the unfamiliarity and power of the interface (attributes that often lead to rejection of new search interfaces), the study results show that 90% of the participants preferred the metadata approach overall, 97% said that it helped them learn more about the collection, 75% found it more flexible, and 72% found it easier to use than a standard baseline system. These results indicate that a category-based approach is a successful way to provide access to image collections.
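    As a toy illustration of the two mechanisms named above, faceted metadata and query previews (counts shown for each facet value before the user commits to a click); the facet names and data here are invented, not the study's actual collection:

    ```python
    from collections import Counter

    # Each image carries facet -> value metadata.
    images = [
        {"medium": "oil", "century": "17th", "location": "Europe"},
        {"medium": "oil", "century": "18th", "location": "Europe"},
        {"medium": "print", "century": "18th", "location": "Asia"},
    ]

    def apply_filters(items, filters):
        return [i for i in items if all(i.get(f) == v for f, v in filters.items())]

    def query_previews(items, filters, facet):
        """Counts per value of `facet` among results matching the current filters."""
        return Counter(i[facet] for i in apply_filters(items, filters))

    print(query_previews(images, {"medium": "oil"}, "century"))
    # Counter({'17th': 1, '18th': 1}) - shown next to each facet link before clicking
    ```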