Search (16 results, page 1 of 1)

  • theme_ss:"Bilder"
  1. Fukumoto, T.: ¬An analysis of image retrieval behavior for metadata type image database (2006) 0.02
    0.0152656 = product of:
      0.071239464 = sum of:
        0.034519844 = weight(_text_:web in 965) [ClassicSimilarity], result of:
          0.034519844 = score(doc=965,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.35694647 = fieldWeight in 965, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=965)
        0.0070627616 = weight(_text_:information in 965) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=965,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=965)
        0.029656855 = weight(_text_:retrieval in 965) [ClassicSimilarity], result of:
          0.029656855 = score(doc=965,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33085006 = fieldWeight in 965, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=965)
      0.21428572 = coord(3/14)
    
    Abstract
    The aim of this paper was to analyze users' behavior during image retrieval exercises. Results revealed that users tend to follow a set search strategy: first they input one or two keyword search terms one after another and view the images generated by their initial search, and afterwards they navigate their way around the web using the 'back to home' or 'previous page' buttons. These results are consistent with existing Web research. Many of the actions recorded revealed that subjects' behavior differed depending on whether the task was presented as a closed or an open task. In contrast, no differences were found in the time subjects took to perform a single action or in their use of the AND operator.
    Source
    Information processing and management. 42(2006) no.3, S.723-728
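    The score breakdowns shown with each result follow Lucene's ClassicSimilarity explain format. As a worked example, the top result's score of 0.0152656 can be reproduced from the listed factors. The following minimal Python sketch assumes Lucene's standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), per-term score = queryWeight * fieldWeight) and plugs in the values from the breakdown above:

      import math

      def classic_idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm, query_norm, max_docs=44218):
          # score(t) = queryWeight * fieldWeight
          #          = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          idf = classic_idf(doc_freq, max_docs)
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      query_norm = 0.029633347
      field_norm = 0.0546875  # fieldNorm(doc=965)

      parts = [
          term_score(4.0, 4597, field_norm, query_norm),   # web         -> 0.034519844
          term_score(2.0, 20772, field_norm, query_norm),  # information -> 0.0070627616
          term_score(4.0, 5836, field_norm, query_norm),   # retrieval   -> 0.029656855
      ]

      # coord(3/14): 3 of the 14 query clauses matched in this document
      print(sum(parts) * 3 / 14)  # ~0.0152656

    The same arithmetic applies to every breakdown below; only the per-term frequencies, document frequencies, and field norms change from result to result.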
  2. Ménard, E.: Image retrieval : a comparative study on the influence of indexing vocabularies (2009) 0.01
    0.0067731095 = product of:
      0.047411766 = sum of:
        0.0050448296 = weight(_text_:information in 3250) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3250,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3250)
        0.042366937 = weight(_text_:retrieval in 3250) [ClassicSimilarity], result of:
          0.042366937 = score(doc=3250,freq=16.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.47264296 = fieldWeight in 3250, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3250)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper reports on a research project that compared two different approaches to indexing ordinary images representing common objects: traditional indexing with controlled vocabulary and free indexing with uncontrolled vocabulary. We also compared image retrieval within two contexts: a monolingual context, where the language of the query is the same as the indexing language, and a multilingual context, where the language of the query differs from the indexing language. As a means of comparing and evaluating the performance of each indexing form, a simulation of the retrieval process involving 30 images was performed with 60 participants. A questionnaire was also submitted to participants to gather information about the retrieval process and performance. The results of the retrieval simulation confirm that retrieval is more effective and more satisfactory for the searcher when the images are indexed with the approach combining controlled and uncontrolled vocabularies. The results also indicate that indexing with controlled vocabulary is more efficient (in the number of queries needed to retrieve an image) than indexing with uncontrolled vocabulary. However, no significant difference in temporal efficiency (time required to retrieve an image) was observed. Finally, the comparison of the two linguistic contexts reveals that retrieval is more effective and more efficient (in the number of queries needed) in the monolingual context than in the multilingual context. Furthermore, image searchers are more satisfied when the retrieval is done in a monolingual context rather than a multilingual one.
  3. Ménard, E.: Study on the influence of vocabularies used for image indexing in a multilingual retrieval environment : reflections on scribbles (2007) 0.01
    0.006770443 = product of:
      0.0473931 = sum of:
        0.017435152 = weight(_text_:web in 1089) [ClassicSimilarity], result of:
          0.017435152 = score(doc=1089,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.18028519 = fieldWeight in 1089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1089)
        0.029957948 = weight(_text_:retrieval in 1089) [ClassicSimilarity], result of:
          0.029957948 = score(doc=1089,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 1089, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1089)
      0.14285715 = coord(2/14)
    
    Abstract
    For many years now, the Web has been an important medium for the diffusion of multilingual resources. Linguistic differences still form a major obstacle to scientific, cultural, and educational exchange. Besides this linguistic diversity, a multitude of databases and collections now contain documents in various formats, which may also adversely affect the retrieval process. This paper describes a research project aiming to verify the existing relations between two indexing approaches, traditional image indexing recommending the use of controlled vocabularies and free image indexing using uncontrolled vocabulary, and their respective performance for image retrieval in a multilingual context. This research also compares image retrieval within two contexts: a monolingual context, where the language of the query is the same as the indexing language; and a multilingual context, where the language of the query is different from the indexing language. This research will indicate whether one of these indexing approaches surpasses the other in terms of effectiveness, efficiency, and satisfaction of the image searchers. This paper presents the context and the problem statement of the research project. The experiment carried out is also described, as well as the data collection methods.
  4. Lepsky, K.; Müller, T.; Wille, J.: Metadata improvement for image information retrieval (2010) 0.01
    0.0056635872 = product of:
      0.03964511 = sum of:
        0.009988253 = weight(_text_:information in 4995) [ClassicSimilarity], result of:
          0.009988253 = score(doc=4995,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 4995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4995)
        0.029656855 = weight(_text_:retrieval in 4995) [ClassicSimilarity], result of:
          0.029656855 = score(doc=4995,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33085006 = fieldWeight in 4995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4995)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper discusses the goals and results of the research project Perseus-a as an attempt to improve information retrieval of digital images by automatically connecting them with text-based descriptions. The development uses the image collection of prometheus, the distributed digital image archive for research and studies, the articles of the digitized Reallexikon zur Deutschen Kunstgeschichte, art-historical terminological resources and classification data, and an open source system for linguistic and statistical automatic indexing called lingo.
  5. Rorissa, A.: ¬A comparative study of Flickr tags and index terms in a general image collection (2010) 0.00
    0.004770705 = product of:
      0.033394933 = sum of:
        0.02465703 = weight(_text_:web in 4100) [ClassicSimilarity], result of:
          0.02465703 = score(doc=4100,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 4100, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4100)
        0.008737902 = weight(_text_:information in 4100) [ClassicSimilarity], result of:
          0.008737902 = score(doc=4100,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16796975 = fieldWeight in 4100, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4100)
      0.14285715 = coord(2/14)
    
    Abstract
    Web 2.0 and social/collaborative tagging have altered the traditional roles of indexer and user. Traditional indexing tools and systems assume the top-down approach to indexing in which a trained professional is responsible for assigning index terms to information sources with a potential user in mind. However, in today's Web, end users create, organize, index, and search for images and other information sources through social tagging and other collaborative activities. One of the impediments to user-centered indexing had been the cost of soliciting user-generated index terms or tags. Social tagging of images such as those on Flickr, an online photo management and sharing application, presents an opportunity that can be seized by designers of indexing tools and systems to bridge the semantic gap between indexer terms and user vocabularies. Empirical research on the differences and similarities between user-generated tags and index terms based on controlled vocabularies has the potential to inform future design of image indexing tools and systems. Toward this end, a random sample of Flickr images and the tags assigned to them were content analyzed and compared with another sample of index terms from a general image collection using established frameworks for image attributes and contents. The results show that there is a fundamental difference between the types of tags and types of index terms used. In light of this, implications for research into and design of user-centered image indexing tools and systems are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.11, S.2230-2242
  6. Rorissa, A.: Relationships between perceived features and similarity of images : a test of Tversky's contrast model (2007) 0.00
    0.004427025 = product of:
      0.030989174 = sum of:
        0.0050448296 = weight(_text_:information in 520) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=520,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 520, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
        0.025944345 = weight(_text_:retrieval in 520) [ClassicSimilarity], result of:
          0.025944345 = score(doc=520,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 520, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=520)
      0.14285715 = coord(2/14)
    
    Abstract
    The rapid growth of the numbers of images and their users as a result of the reduction in cost and increase in efficiency of the creation, storage, manipulation, and transmission of images poses challenges to those who organize and provide access to images. One of these challenges is similarity matching, a key component of current content-based image retrieval systems. Similarity matching often is implemented through similarity measures based on geometric models of similarity whose metric axioms are not satisfied by human similarity judgment data. This study is significant in that it is among the first known to test Tversky's contrast model, which equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, in the context of image representation and retrieval. Data were collected from 150 participants who performed an image description and a similarity judgment task. Structural equation modeling, correlation, and regression analyses confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky. The results hold implications for future research that will attempt to further test the contrast model and assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.10, S.1401-1418
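    The contrast model tested in this study equates the similarity of two stimuli to a linear combination of their common and distinctive features; in the standard formulation, S(a, b) = theta*f(A ∩ B) - alpha*f(A - B) - beta*f(B - A). The Python sketch below illustrates that formulation; the image feature sets and weights are hypothetical, not taken from the study:

      def tversky_contrast(a, b, theta=1.0, alpha=0.5, beta=0.5):
          # f(.) is taken here as simple set cardinality, one common choice.
          return (theta * len(a & b)      # common features raise similarity
                  - alpha * len(a - b)    # features only in a lower it
                  - beta * len(b - a))    # features only in b lower it

      # Hypothetical perceived features of two images:
      img1 = {"outdoor", "water", "boat", "sunset"}
      img2 = {"outdoor", "water", "bridge"}
      print(tversky_contrast(img1, img2))  # 2*1.0 - 2*0.5 - 1*0.5 = 0.5

    Unlike geometric (metric) similarity measures, this score need not be symmetric when alpha and beta differ, which is one reason the model fits human similarity judgments better.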
  7. Kim, C.-R.; Chung, C.-W.: XMage: An image retrieval method based on partial similarity (2006) 0.00
    0.004427025 = product of:
      0.030989174 = sum of:
        0.0050448296 = weight(_text_:information in 973) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=973,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 973, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=973)
        0.025944345 = weight(_text_:retrieval in 973) [ClassicSimilarity], result of:
          0.025944345 = score(doc=973,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.28943354 = fieldWeight in 973, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=973)
      0.14285715 = coord(2/14)
    
    Abstract
    XMage is introduced in this paper as a method for partial similarity searching in image databases. Region-based image retrieval is a method of retrieving partially similar images. It has been proposed as a way to accurately process queries in an image database. In region-based image retrieval, region matching is indispensable for computing the partial similarity between two images, because query processing is based on regions instead of the entire image. A naive method of region matching is a sequential comparison between regions, which causes severe overhead and degrades the performance of query processing. In this paper, a new image-content representation, called the Condensed eXtended Histogram (CX-Histogram), is presented in conjunction with a well-defined distance function CXSim() on the CX-Histogram. CXSim() is a new image-to-image similarity measure for computing the partial similarity between two images. It achieves the effect of comparing regions of two images by simply comparing the two images. CXSim() reduces the query space by pruning irrelevant images, and it is used as a filtering function before sequential scanning. Extensive experiments were performed on real image data to evaluate XMage. It provides significant pruning of irrelevant images with no false dismissals. As a consequence, it achieves up to a 5.9-fold speed-up in search over R*-tree search followed by sequential scanning.
    Source
    Information processing and management. 42(2006) no.2, S.484-502
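    The query pipeline described above is a classic filter-then-scan design: a cheap whole-image measure prunes clearly irrelevant images before the expensive region-by-region comparison. The Python sketch below outlines only that structure; the histogram-intersection filter and threshold are illustrative stand-ins, not the paper's CX-Histogram and CXSim(), which are constructed so that pruning causes no false dismissals:

      def histogram_similarity(h1, h2):
          # Cheap whole-image filter: normalized histogram intersection (stand-in).
          return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1)

      def query(db, query_hist, query_regions, region_match, threshold=0.5):
          # Stage 1: prune images whose cheap similarity falls below the threshold.
          candidates = [img for img in db
                        if histogram_similarity(img["hist"], query_hist) >= threshold]
          # Stage 2: sequential scan of the survivors with the expensive region matcher.
          return sorted(candidates,
                        key=lambda img: region_match(img["regions"], query_regions),
                        reverse=True)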
  8. Lee, C.-Y.; Soo, V.-W.: ¬The conflict detection and resolution in knowledge merging for image annotation (2006) 0.00
    0.003790876 = product of:
      0.02653613 = sum of:
        0.00856136 = weight(_text_:information in 981) [ClassicSimilarity], result of:
          0.00856136 = score(doc=981,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=981)
        0.01797477 = weight(_text_:retrieval in 981) [ClassicSimilarity], result of:
          0.01797477 = score(doc=981,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.20052543 = fieldWeight in 981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=981)
      0.14285715 = coord(2/14)
    
    Abstract
    Semantic annotation of images is an important step in supporting semantic information extraction and retrieval. However, in a multi-annotator environment, various types of conflicts, such as converting, merging, and inference conflicts, can arise during annotation. We devised conflict detection patterns based on data and ontologies at different inference levels and proposed corresponding automatic conflict resolution strategies. We also constructed a simple annotator model to decide whether to trust a given piece of annotation from a given annotator. Finally, we conducted experiments to compare the performance of the automatic conflict resolution approaches during the annotation of images in the celebrity domain by 62 annotators. The experiments showed that the proposed method improved annotation accuracy by 3/4 with respect to a naïve annotation system.
    Source
    Information processing and management. 42(2006) no.4, S.1030-1055
  9. Ménard, E.: Image retrieval in multilingual environments : research issues (2006) 0.00
    0.002420968 = product of:
      0.033893548 = sum of:
        0.033893548 = weight(_text_:retrieval in 240) [ClassicSimilarity], result of:
          0.033893548 = score(doc=240,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37811437 = fieldWeight in 240, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=240)
      0.071428575 = coord(1/14)
    
    Abstract
    This paper presents an overview of the nature and the characteristics of the numerous problems encountered when a user tries to access a collection of images in a multilingual environment. The authors identify major research questions to be investigated to improve image retrieval effectiveness in a multilingual environment.
  10. British Library stellt über eine Million gemeinfreie Bilder in Netz (2013) 0.00
    0.0017435154 = product of:
      0.024409214 = sum of:
        0.024409214 = weight(_text_:web in 1148) [ClassicSimilarity], result of:
          0.024409214 = score(doc=1148,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 1148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1148)
      0.071428575 = coord(1/14)
    
    Abstract
    The British Library has published over one million scanned images on the Web. The public-domain, i.e. freely usable, images, which are available via the Flickr page of the British national library, come from books of the 17th, 18th, and 19th centuries, according to an announcement. They were digitized by Microsoft from 65,000 books. The software company and the British Library had agreed on a collaboration eight years ago. The contents of 100,000 books were initially to be made searchable via Microsoft's book search project. All images are provided with provenance information and the year of publication. As a next step, the British Library is planning a crowdsourcing project to automatically classify the images by content. The British Library has made the data on the images available on Github. The code is to be released under an open license.
  11. Jesdanun, A.: Streitbare Suchmaschine : Polar Rose ermöglicht Internet-Recherche mit Gesichtserkennung (2007) 0.00
    9.962944E-4 = product of:
      0.013948122 = sum of:
        0.013948122 = weight(_text_:web in 547) [ClassicSimilarity], result of:
          0.013948122 = score(doc=547,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.14422815 = fieldWeight in 547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=547)
      0.071428575 = coord(1/14)
    
    Abstract
    The project of a Swedish company developing an Internet search engine with face recognition raises problems for the protection of personality rights. The technology of the company Polar Rose scans publicly available photos, sorts them by around 90 different features, and builds a database from them. The search engine is supposed to be able to match any photo against this data, determine the identity of the person shown, and deliver a list of web pages on which this person can be seen. In tests with 10,000 photos, reliable recognition was achieved in 95 percent of cases, says Polar Rose CEO Nikolaj Nyholm. However, he concedes that accuracy will presumably decline as the data basis grows, because with millions and perhaps billions of photos of people, the probability increases that two or more people look very similar. For this reason, users of the planned Internet service are to contribute information themselves, such as the names of the persons depicted. Polar Rose pursues the concept of making the countless photos found on sites such as Flickr or Myspace more searchable than conventional image search does. People who are only visible in the background of a photo are also to be captured in this way. But what happens if employers, police, or suspicious partners use this to uncover the presence of a person at a particular place that was actually supposed to remain confidential? "I don't think we have all the answers yet," Nyholm concedes. The head of the organization Privacy International, Simon Davies, sees techniques like Polar Rose's as confirming his assessment that there must be limits to Internet search. Otherwise, searching the Internet will advance into dimensions "infinitely more powerful than we could ever have imagined." Davies calls for a debate on limiting Internet search and on giving individuals a say in the use of their data. The availability of photos on the Internet, he argues, is no license for their mass processing in databases.
  12. Drolshagen, J.A.: Pictorial representation of quilts from the underground railroad (2005) 0.00
    5.04483E-4 = product of:
      0.0070627616 = sum of:
        0.0070627616 = weight(_text_:information in 6086) [ClassicSimilarity], result of:
          0.0070627616 = score(doc=6086,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13576832 = fieldWeight in 6086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6086)
      0.071428575 = coord(1/14)
    
    Abstract
    The Underground Railroad was a network of people who helped fugitive slaves escape to the North and Canada during the U.S. Civil War period, beginning in about 1831. Quilting was used as a form of information representation (Breneman 2001). This simple classification was designed to relate the symbolic transmission of escape routes and locations of sanctuary. Because it was for use in a children's library, symbolic representations were used to anchor the classes. The symbols are drawn from African graphic arts, specifically the Adinkra symbols of Ghana (West African Wisdom 2001), and also from actual quilt practice (Threads of Freedom 2001; Breneman 2001).
  13. Bredekamp, H.: Theorie des Bildakts : über das Lebensrecht des Bildes (2005) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 4820) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4820,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4820)
      0.071428575 = coord(1/14)
    
    Theme
    Information
  14. Stvilia, B.; Jörgensen, C.: Member activities and quality of tags in a collection of historical photographs in Flickr (2010) 0.00
    3.6034497E-4 = product of:
      0.0050448296 = sum of:
        0.0050448296 = weight(_text_:information in 4117) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=4117,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 4117, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4117)
      0.071428575 = coord(1/14)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2477-2489
  15. Scalla, M.: Auf der Phantom-Spur : Georges Didi-Hubermans neues Standardwerk über Aby Warburg (2006) 0.00
    2.8677925E-4 = product of:
      0.0040149093 = sum of:
        0.0040149093 = product of:
          0.012044728 = sum of:
            0.012044728 = weight(_text_:22 in 4054) [ClassicSimilarity], result of:
              0.012044728 = score(doc=4054,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.116070345 = fieldWeight in 4054, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4054)
          0.33333334 = coord(1/3)
      0.071428575 = coord(1/14)
    
    Date
    6. 1.2011 11:22:12
  16. Scalla, M.: Bilder sehen Dich an : Horst Bredekamp auf den Spuren von Max Horkheimer und Theodor W. Adorno (2005) 0.00
    2.16207E-4 = product of:
      0.0030268978 = sum of:
        0.0030268978 = weight(_text_:information in 4047) [ClassicSimilarity], result of:
          0.0030268978 = score(doc=4047,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.058186423 = fieldWeight in 4047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4047)
      0.071428575 = coord(1/14)
    
    Theme
    Information