Search (38 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.06
    0.06232588 = product of:
      0.12465176 = sum of:
        0.12465176 = sum of:
          0.07463771 = weight(_text_:retrieval in 4888) [ClassicSimilarity], result of:
            0.07463771 = score(doc=4888,freq=16.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.47264296 = fieldWeight in 4888, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
          0.05001405 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
            0.05001405 = score(doc=4888,freq=4.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.27358043 = fieldWeight in 4888, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
      0.5 = coord(1/2)
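    The breakdown above follows Lucene's ClassicSimilarity (tf-idf) explain format: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm, and coord(1/2) halves the sum because only one of the two top-level query clauses matched. The following Python sketch (an illustration added for clarity, not part of the database output; the helper name classic_tfidf_weight is ours) reproduces the two term weights and the final score of result 1 from the figures shown in the tree.

      import math

      def classic_tfidf_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity components as printed in the explain tree:
          #   tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1))
          #   queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
          tf = math.sqrt(freq)
          return (idf * query_norm) * (tf * idf * field_norm)

      query_norm = 0.052204985  # queryNorm shown in the tree
      w_retrieval = classic_tfidf_weight(16.0, 5836, 44218, query_norm, 0.0390625)
      w_22 = classic_tfidf_weight(4.0, 3622, 44218, query_norm, 0.0390625)

      # coord(1/2) = 0.5: only one of the two top-level query clauses matched.
      score = 0.5 * (w_retrieval + w_22)
      print(w_retrieval, w_22, score)  # ~0.0746377, ~0.0500141, ~0.0623259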
    
    Abstract
    This paper centres on tools for the management of new digital documents, which are not only textual but also visual, video, audio, or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval (MMIR), according to which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which takes an approach to information management that handles the concrete textual, visual, audio, or video content of the documents directly, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and advantages of each of the approaches and systems of access to information.
    Date
    22. 1.2012 13:02:10
    Footnote
    Refers to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  2. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.05
    0.049402952 = product of:
      0.098805904 = sum of:
        0.098805904 = sum of:
          0.042221468 = weight(_text_:retrieval in 5830) [ClassicSimilarity], result of:
            0.042221468 = score(doc=5830,freq=2.0), product of:
              0.15791564 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.052204985 = queryNorm
              0.26736724 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.056584436 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.056584436 = score(doc=5830,freq=2.0), product of:
              0.18281296 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052204985 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:08
  3. Enser, P.G.B.; Sandom, C.J.; Hare, J.S.; Lewis, P.H.: Facing the reality of semantic image retrieval (2007) 0.02
    0.020861875 = product of:
      0.04172375 = sum of:
        0.04172375 = product of:
          0.0834475 = sum of:
            0.0834475 = weight(_text_:retrieval in 837) [ClassicSimilarity], result of:
              0.0834475 = score(doc=837,freq=20.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.5284309 = fieldWeight in 837, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=837)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To provide a better-informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval techniques. Design/methodology/approach - Within an ongoing project, a broad spectrum of operational image retrieval activity has been surveyed, and, from a number of collaborating institutions, a test collection assembled which comprises user requests, the images selected in response to those requests, and their associated metadata. This has provided the evidence base upon which to make informed observations on the efficacy of cutting-edge automatic annotation techniques which seek to integrate the text-based and content-based image retrieval paradigms. Findings - Evidence from the real-world practice of image retrieval highlights the existence of a generic-specific continuum of object identification, and the incidence of temporal, spatial, significance and abstract concept facets, manifest in textual indexing and real-query scenarios but often having no directly visible presence in an image. These factors combine to limit the functionality of current semantic image retrieval techniques, which interpret only visible features at the generic extremity of the generic-specific continuum. Research limitations/implications - The project is concerned with the traditional image retrieval environment in which retrieval transactions are conducted on still images which form part of managed collections. The possibilities offered by ontological support for adding functionality to automatic annotation techniques are considered. Originality/value - The paper offers fresh insights into the challenge of migrating content-based image retrieval from the laboratory to the operational environment, informed by newly-assembled, comprehensive, live data.
  4. Krause, J.: Principles of content analysis for information retrieval systems : an overview (1996) 0.02
    0.018471893 = product of:
      0.036943786 = sum of:
        0.036943786 = product of:
          0.07388757 = sum of:
            0.07388757 = weight(_text_:retrieval in 5270) [ClassicSimilarity], result of:
              0.07388757 = score(doc=5270,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.46789268 = fieldWeight in 5270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.017682636 = product of:
      0.035365272 = sum of:
        0.035365272 = product of:
          0.070730545 = sum of:
            0.070730545 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.070730545 = score(doc=5835,freq=2.0), product of:
                0.18281296 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052204985 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:44
  6. Ornager, S.: View a picture : theoretical image analysis and empirical user studies on indexing and retrieval (1996) 0.02
    0.015997129 = product of:
      0.031994257 = sum of:
        0.031994257 = product of:
          0.063988514 = sum of:
            0.063988514 = weight(_text_:retrieval in 904) [ClassicSimilarity], result of:
              0.063988514 = score(doc=904,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40520695 = fieldWeight in 904, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=904)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines Panofsky's and Barthes's theories of image analysis and reports on a study of criteria for the analysis and indexing of images and of the different types of user queries used in 15 Danish newspaper image archives. A structured interview method, observation, and various categories for subject analysis were used. The results identify a list of the minimum number of elements and lead to a user typology of five categories. The requirements for retrieval may involve combining images in a more visual way with text-based image retrieval.
  7. Hidderley, R.; Rafferty, P.: Democratic indexing : an approach to the retrieval of fiction (1997) 0.02
    0.015997129 = product of:
      0.031994257 = sum of:
        0.031994257 = product of:
          0.063988514 = sum of:
            0.063988514 = weight(_text_:retrieval in 1783) [ClassicSimilarity], result of:
              0.063988514 = score(doc=1783,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40520695 = fieldWeight in 1783, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1783)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines how an analytical framework for describing the contents of images may be extended to deal with time-based materials such as film and music. A levels-of-meaning table was developed and used as an indexing template for image retrieval purposes. Develops a concept of democratic indexing focused on user interpretation. Describes the approach to image or pictorial information retrieval and extends it in relation to fiction.
  8. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.02
    0.015997129 = product of:
      0.031994257 = sum of:
        0.031994257 = product of:
          0.063988514 = sum of:
            0.063988514 = weight(_text_:retrieval in 5844) [ClassicSimilarity], result of:
              0.063988514 = score(doc=5844,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40520695 = fieldWeight in 5844, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5844)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The arts, humanities, and social sciences commonly borrow concepts and methods from the sciences, but interdisciplinary borrowing seldom occurs in the opposite direction. Research on narrative discourse is relevant to problems of documentary storage and retrieval, for the arts and humanities in particular, but also for other broad areas of knowledge. This paper views the potential application of narrative discourse analysis to information storage and retrieval problems from 2 perspectives: 1) analysis and comparison of narrative documents in all disciplines may be simplified if fundamental categories that occur in narrative documents can be isolated; and 2) the possibility of subdividing the world of knowledge initially into narrative and non-narrative documents is explored with particular attention to Werlich's work on text types
  9. Austin, J.; Pejtersen, A.M.: Fiction retrieval: experimental design and evaluation of a search system based on user's value criteria. Pt.1 (1983) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 142) [ClassicSimilarity], result of:
              0.0633322 = score(doc=142,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=142)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Belkin, N.J.: ¬The problem of 'matching' in information retrieval (1980) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 1329) [ClassicSimilarity], result of:
              0.0633322 = score(doc=1329,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 1329, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1329)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Pejtersen, A.M.: Implications of users' value perception for the design of knowledge based bibliographic retrieval systems (1985) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 2088) [ClassicSimilarity], result of:
              0.0633322 = score(doc=2088,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 2088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2088)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Wyllie, J.: Concept indexing : the world beyond the windows (1990) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 2977) [ClassicSimilarity], result of:
              0.0633322 = score(doc=2977,freq=2.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 2977, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2977)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper argues that the realisation of the electronic hypermedia of the future depends on integrating the technology of free text retrieval with the classification-based discipline of content analysis.
  13. Bednarek, M.: Intellectual access to pictorial information (1993) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 5631) [ClassicSimilarity], result of:
              0.0633322 = score(doc=5631,freq=8.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 5631, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5631)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to the characteristics of the enquirer and the visual information sought.
  14. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.02
    0.01583305 = product of:
      0.0316661 = sum of:
        0.0316661 = product of:
          0.0633322 = sum of:
            0.0633322 = weight(_text_:retrieval in 1958) [ClassicSimilarity], result of:
              0.0633322 = score(doc=1958,freq=8.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.40105087 = fieldWeight in 1958, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1958)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information search and retrieval interactions usually involve information content in the form of document collections, information retrieval systems and interfaces, and the user. To fully understand information search and retrieval interactions between users' cognitive space and the information space, researchers need to turn to cognitive models and theories. In this article, the authors use one of these theories, the basic level theory. Use of the basic level theory to understand human categorization is both appropriate and essential to user-centered design of taxonomies, ontologies, browsing interfaces, and other indexing tools and systems. Analyses of data from two studies involving the free sorting of 100 images by 105 participants were conducted. The types of categories formed and the category labels were examined. Results of the analyses indicate that image category labels generally belong to levels superordinate to the basic level and are generic and interpretive. Implications for research on theories of cognition and categorization, and for the design of image indexing, retrieval, and browsing systems, are discussed.
  15. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.014146109 = product of:
      0.028292218 = sum of:
        0.028292218 = product of:
          0.056584436 = sum of:
            0.056584436 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.056584436 = score(doc=251,freq=2.0), product of:
                0.18281296 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052204985 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  16. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    0.013711825 = product of:
      0.02742365 = sum of:
        0.02742365 = product of:
          0.0548473 = sum of:
            0.0548473 = weight(_text_:retrieval in 4471) [ClassicSimilarity], result of:
              0.0548473 = score(doc=4471,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.34732026 = fieldWeight in 4471, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4471)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident, or not even present, in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, as well as at a post-retrieval process of image engagement leading to users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagement with viewed images could further enhance the efficiency of image retrieval systems that stem from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
  17. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.01
    0.013711825 = product of:
      0.02742365 = sum of:
        0.02742365 = product of:
          0.0548473 = sum of:
            0.0548473 = weight(_text_:retrieval in 3953) [ClassicSimilarity], result of:
              0.0548473 = score(doc=3953,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.34732026 = fieldWeight in 3953, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3953)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper endeavours to show the possibilities for the thematic description of audio-visual documents for television, with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals, different database fields are shown, as well as the way in which they are organised for indexing and thematic element description, analysed and used as an example. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain; others are being tested in a university television service in which indexing experiments are carried out. Findings - Not all television information systems use thematic descriptions; nevertheless, some television channels do describe both image and sound thematically, applying thesauri. Moreover, it is also possible to access sequences using full-text retrieval. Originality/value - The development of the documentary task, applying the described techniques, promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
  18. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.01
    0.0130616 = product of:
      0.0261232 = sum of:
        0.0261232 = product of:
          0.0522464 = sum of:
            0.0522464 = weight(_text_:retrieval in 5828) [ClassicSimilarity], result of:
              0.0522464 = score(doc=5828,freq=4.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.33085006 = fieldWeight in 5828, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5828)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Issues concerning the formulation and application of a model of how humans value information are examined. Formulation of a value function is based on research from modelling, value assessment, human information seeking behavior, and human decision making. The proposed function is incorporated into a computer-based fiction retrieval system and evaluated using data from nine searches. Evaluation is based on the ability of an individual's value function to discriminate among novels selected, rejected, and not considered. The results are discussed in terms of both formulation and utilization of a value function as well as the implications for extending the proposed formulation to other information seeking environments
  19. Rorissa, A.: User-generated descriptions of individual images versus labels of groups of images : a comparison using basic level theory (2008) 0.01
    0.0114265205 = product of:
      0.022853041 = sum of:
        0.022853041 = product of:
          0.045706082 = sum of:
            0.045706082 = weight(_text_:retrieval in 2122) [ClassicSimilarity], result of:
              0.045706082 = score(doc=2122,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.28943354 = fieldWeight in 2122, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2122)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although images are visual information sources with little or no text associated with them, users still tend to use text to describe images and formulate queries. This is because digital libraries and search engines provide mostly text query options and rely on text annotations for representation and retrieval of the semantic content of images. While the main focus of image research is on the indexing and retrieval of individual images, the general topic of image browsing and the indexing and retrieval of groups of images has not been adequately investigated. Comparisons, using cognitive models, of user-supplied descriptions of individual images and labels of groups of images are scarce. This work fills that gap. Using the basic level theory as a framework, a comparison of the descriptions of individual images and the labels assigned to groups of images by 180 participants in three studies found a marked difference in their level of abstraction. Results confirm assertions by previous researchers in LIS and other fields that groups of images are labeled using more superordinate level terms, while individual image descriptions are mainly at the basic level. Implications for the design of image browsing interfaces, taxonomies, thesauri, and similar tools are discussed.
  20. Yoon, J.W.: Utilizing quantitative users' reactions to represent affective meanings of an image (2010) 0.01
    0.0114265205 = product of:
      0.022853041 = sum of:
        0.022853041 = product of:
          0.045706082 = sum of:
            0.045706082 = weight(_text_:retrieval in 3584) [ClassicSimilarity], result of:
              0.045706082 = score(doc=3584,freq=6.0), product of:
                0.15791564 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.052204985 = queryNorm
                0.28943354 = fieldWeight in 3584, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3584)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Emotional meaning is critical for users to retrieve relevant images. However, because emotional meanings are subject to the individual viewer's interpretation, they are considered difficult to implement when designing image retrieval systems. With the intent of making an image's emotional messages more readily accessible, this study aims to test a new approach designed to enhance the accessibility of emotional meanings during the image search process. This approach utilizes image searchers' emotional reactions, which are quantitatively measured. Two broadly used quantitative measurements of emotional reactions, the Semantic Differential (SD) and the Self-Assessment Manikin (SAM), were selected as tools for gathering users' reactions. Emotional representations obtained from these two tools were compared across three image perception tasks: searching, describing, and sorting. A survey questionnaire with a set of 12 images tagged with basic emotions was administered to 58 participants. Results demonstrated that the SAM represents basic emotions on two-dimensional plots (pleasure and arousal dimensions), and that this representation consistently corresponded to the three image perception tasks. This study provided experimental evidence that quantitatively measured user reactions can be a useful complementary element of current image retrieval/indexing systems. Integrating users' reactions obtained from the SAM into image browsing systems would reduce the effort of human indexers as well as improve the effectiveness of image retrieval systems.