Search (18 results, page 1 of 1)

  • theme_ss:"Inhaltsanalyse"
  • year_i:[2000 TO 2010}
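The two active filters above are Solr field queries. The field suffixes (_ss, _i) follow common dynamic-field naming, and the mixed brackets in the year range make the lower bound inclusive and the upper bound exclusive (2000-2009). The sketch below shows, under those assumptions, how the filters could be sent as fq parameters; the host and core name ("records") are placeholders, only the two fq values are taken from the filters themselves.

    from urllib.parse import urlencode

    # Sketch only: endpoint and core name are assumptions;
    # the two fq values are copied from the active filters above.
    params = [
        ("q", "*:*"),
        ("fq", 'theme_ss:"Inhaltsanalyse"'),
        # [2000 TO 2010} : 2000 inclusive, 2010 exclusive
        ("fq", "year_i:[2000 TO 2010}"),
        ("rows", "20"),
    ]
    print("http://localhost:8983/solr/records/select?" + urlencode(params))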
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.09
    0.08519173 = product of:
      0.12778759 = sum of:
        0.10759281 = weight(_text_:systematic in 5589) [ClassicSimilarity], result of:
          0.10759281 = score(doc=5589,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.04038954 = score(doc=5589,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
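
The indented breakdown above (repeated in the same form under every result) is Lucene "explain" output for the ClassicSimilarity ranking model. As a reading aid, the sketch below recomputes the displayed score for result 1 from the constants printed in the breakdown; it assumes the standard ClassicSimilarity combination (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, coord = matched clauses / total clauses), which the printed numbers are consistent with.

    import math

    # All constants below are copied from the explain output for doc 5589;
    # only the combining formulas are assumed (standard ClassicSimilarity).
    QUERY_NORM = 0.049684696

    def clause_score(freq, idf, field_norm, query_norm=QUERY_NORM):
        tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
        query_weight = idf * query_norm       # e.g. 0.28397155 for "systematic"
        field_weight = tf * idf * field_norm  # e.g. 0.3788859
        return query_weight * field_weight

    # weight(_text_:systematic): freq=2.0, idf=5.715473, fieldNorm=0.046875
    systematic = clause_score(2.0, 5.715473, 0.046875)      # ~0.10759281
    # weight(_text_:22): freq=2.0, idf=3.5018296, fieldNorm=0.046875,
    # scaled by the inner coord(1/2) = 0.5
    term_22 = clause_score(2.0, 3.5018296, 0.046875) * 0.5  # ~0.02019477

    # Outer coord(2/3): two of the three query clauses matched this document.
    score = (systematic + term_22) * (2.0 / 3.0)
    print(round(score, 8))  # ~0.08519173, shown in the result list as 0.09

The same arithmetic, with different freq, idf and fieldNorm values, accounts for the remaining scores; for example, freq=16.0 in result 2 yields tf = sqrt(16) = 4.0.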
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  2. Mai, J.-E.: Analysis in indexing : document and domain centered approaches (2005) 0.03
    0.026541978 = product of:
      0.079625934 = sum of:
        0.079625934 = product of:
          0.15925187 = sum of:
            0.15925187 = weight(_text_:indexing in 1024) [ClassicSimilarity], result of:
              0.15925187 = score(doc=1024,freq=16.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.8373461 = fieldWeight in 1024, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1024)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper discusses the notion of steps in indexing, shows that the document-centered approach to indexing is prevalent, and argues that this approach is problematic because it blocks out context-dependent factors in the indexing process. A domain-centered approach to indexing is presented as an alternative; the paper discusses how this approach includes a broader range of analyses and how it requires a new set of activities: analysis of the domain, the users, and the indexers. The paper concludes that the two-step procedure for indexing is insufficient to explain the indexing process and suggests that the domain-centered approach offers a guide that can help indexers manage the complexity of indexing.
  3. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.02
    0.023219395 = product of:
      0.06965818 = sum of:
        0.06965818 = product of:
          0.13931637 = sum of:
            0.13931637 = weight(_text_:indexing in 1590) [ClassicSimilarity], result of:
              0.13931637 = score(doc=1590,freq=24.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.7325252 = fieldWeight in 1590, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1590)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper considers indexing an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: it begins with some basic considerations on the nature of indexing and the requirements for such a theory; it is followed by a short review of the use of Wittgenstein's philosophy in the LIS literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in the linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory hereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
  4. Mai, J.-E.: Deconstructing the indexing process (2000) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 4696) [ClassicSimilarity], result of:
              0.12869495 = score(doc=4696,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 4696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.125 = fieldNorm(doc=4696)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  5. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.02
    0.018172052 = product of:
      0.05451615 = sum of:
        0.05451615 = product of:
          0.1090323 = sum of:
            0.1090323 = weight(_text_:indexing in 133) [ClassicSimilarity], result of:
              0.1090323 = score(doc=133,freq=30.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.57329166 = fieldWeight in 133, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistency can be explained as acceptable, yet different, conceptualizations that result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented, selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify the structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we iteratively identify and represent the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with its viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it applies in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work or whether the indexing results would really be "better".
  6. Mai, J.-E.: Semiotics and indexing : an analysis of the subject indexing process (2001) 0.02
    0.017734105 = product of:
      0.053202312 = sum of:
        0.053202312 = product of:
          0.106404625 = sum of:
            0.106404625 = weight(_text_:indexing in 4480) [ClassicSimilarity], result of:
              0.106404625 = score(doc=4480,freq=14.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.55947536 = fieldWeight in 4480, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4480)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper explains at least some of the major problems related to the subject indexing process and proposes a new approach to understanding the process, which is ordinarily described as a process that takes a number of steps. The subject is first determined, then it is described in a few sentences and, lastly, the description of the subject is converted into the indexing language. It is argued that this typical approach characteristically lacks an understanding of what the central nature of the process is. Indexing is not a neutral and objective representation of a document's subject matter but the representation of an interpretation of a document for future use. Semiotics is offered here as a framework for understanding the "interpretative" nature of the subject indexing process. By placing this process within Peirce's semiotic framework of ideas and terminology, a more detailed description of the process is offered which shows that the uncertainty generally associated with this process is created by the fact that the indexer goes through a number of steps and creates the subject matter of the document during this process. The creation of the subject matter is based on the indexer's social and cultural context. The paper offers an explanation of what occurs in the indexing process and suggests that there is only little certainty to its result.
  7. Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.02
    0.016957048 = product of:
      0.050871145 = sum of:
        0.050871145 = product of:
          0.10174229 = sum of:
            0.10174229 = weight(_text_:indexing in 2653) [ClassicSimilarity], result of:
              0.10174229 = score(doc=2653,freq=20.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5349608 = fieldWeight in 2653, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2653)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper demonstrates that indexing is a complex phenomenon and presents a domain-centered approach to indexing. The indexing process is analysed using Means-Ends Analysis, a tool developed for the Cognitive Work Analysis framework. A Means-Ends Analysis of indexing provides a holistic understanding of indexing and shows the importance of understanding the users' activities when indexing. The paper presents a domain-centered approach to indexing that includes an analysis of the users' activities, and outlines that approach.
    Content
    1. Introduction The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document, with the goal of representing the document as truthfully as possible and of ensuring the subject representation's validity by remaining neutral and objective. To help indexers with their task they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document that is being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents as well as when selecting index terms.
  8. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.01
    0.011375135 = product of:
      0.034125403 = sum of:
        0.034125403 = product of:
          0.068250805 = sum of:
            0.068250805 = weight(_text_:indexing in 1958) [ClassicSimilarity], result of:
              0.068250805 = score(doc=1958,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.3588626 = fieldWeight in 1958, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1958)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Information search and retrieval interactions usually involve information content in the form of document collections, information retrieval systems and interfaces, and the user. To fully understand information search and retrieval interactions between users' cognitive space and the information space, researchers need to turn to cognitive models and theories. In this article, the authors use one of these theories, the basic level theory. Use of the basic level theory to understand human categorization is both appropriate and essential to user-centered design of taxonomies, ontologies, browsing interfaces, and other indexing tools and systems. Analyses of data from two studies involving free sorting by 105 participants of 100 images were conducted. The types of categories formed and the category labels were examined. Results of the analyses indicate that image category labels generally belong to levels superordinate to the basic level, and are generic and interpretive. Implications for research on theories of cognition and categorization, and for the design of image indexing, retrieval and browsing systems, are discussed.
  9. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.01
    0.009479279 = product of:
      0.028437834 = sum of:
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 2069) [ClassicSimilarity], result of:
              0.05687567 = score(doc=2069,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 2069, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2069)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings and findings of other researchers in the area of information science, social psychology, and psycholinguistics indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study with catalogers revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
  10. Sauperl, A.: Subject cataloging process of Slovenian and American catalogers (2005) 0.01
    0.009479279 = product of:
      0.028437834 = sum of:
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 4702) [ClassicSimilarity], result of:
              0.05687567 = score(doc=4702,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 4702, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4702)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - An empirical study has shown that the real process of subject cataloging does not correspond entirely to theoretical descriptions in textbooks and international standards. The purpose of this paper is to address the issue of whether it is possible for catalogers who have not received formal training to perform subject cataloging in a different way to their trained colleagues. Design/methodology/approach - A qualitative study was conducted in 2001 among five Slovenian public library catalogers. The resulting model is compared to previous findings. Findings - First, all catalogers attempted to determine what the book was about. While the American catalogers tried to understand the topic and the author's intent, the Slovenian catalogers appeared to focus on the topic only. Slovenian and American academic library catalogers did not demonstrate any anticipation of possible uses that users might have of the book, while this was important for American public library catalogers. All catalogers used existing records to build new ones and/or to search for subject headings. The verification of subject representation with the indexing language was the last step in the subject cataloging process of American catalogers, often skipped by Slovenian catalogers. Research limitations/implications - The small convenience sample limits the findings. Practical implications - Comparison of the subject cataloging processes of Slovenian and American catalogers, two different groups, is important because they both contribute to OCLC's WorldCat database. If the cataloging community is building a universal catalog and approaches to subject description are different, then the resulting subject representations might also be different. Originality/value - This is one of the very few empirical studies of subject cataloging and indexing.
  11. Rorissa, A.: User-generated descriptions of individual images versus labels of groups of images : a comparison using basic level theory (2008) 0.01
    0.009479279 = product of:
      0.028437834 = sum of:
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 2122) [ClassicSimilarity], result of:
              0.05687567 = score(doc=2122,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 2122, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2122)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Although images are visual information sources with little or no text associated with them, users still tend to use text to describe images and formulate queries. This is because digital libraries and search engines provide mostly text query options and rely on text annotations for representation and retrieval of the semantic content of images. While the main focus of image research is on the indexing and retrieval of individual images, the more general topic of browsing, indexing, and retrieval of groups of images has not been adequately investigated. Comparisons, based on cognitive models, of user-supplied descriptions of individual images with labels of groups of images are scarce. This work fills this gap. Using the basic level theory as a framework, a comparison of the descriptions of individual images and labels assigned to groups of images by 180 participants in three studies found a marked difference in their level of abstraction. Results confirm assertions by previous researchers in LIS and other fields that groups of images are labeled using more superordinate level terms while individual image descriptions are mainly at the basic level. Implications for the design of image browsing interfaces, taxonomies, thesauri, and similar tools are discussed.
  12. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.01
    0.009384007 = product of:
      0.02815202 = sum of:
        0.02815202 = product of:
          0.05630404 = sum of:
            0.05630404 = weight(_text_:indexing in 2691) [ClassicSimilarity], result of:
              0.05630404 = score(doc=2691,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29604656 = fieldWeight in 2691, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2691)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  13. Naun, C.C.: Objectivity and subject access in the print library (2006) 0.01
    0.009384007 = product of:
      0.02815202 = sum of:
        0.02815202 = product of:
          0.05630404 = sum of:
            0.05630404 = weight(_text_:indexing in 236) [ClassicSimilarity], result of:
              0.05630404 = score(doc=236,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29604656 = fieldWeight in 236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Librarians have inherited from the print environment a particular way of thinking about subject representation, one based on the conscious identification by librarians of appropriate subject classes and terminology. This conception has played a central role in shaping the profession's characteristic approach to upholding one of its core values: objectivity. It is argued that the social and technological roots of traditional indexing practice are closely intertwined. It is further argued that in traditional library practice objectivity is to be understood as impartiality, and reflects the mediating role that librarians have played in society. The case presented here is not a historical one based on empirical research, but rather a conceptual examination of practices that are already familiar to most librarians.
  14. Greisdorf, H.; O'Connor, B.: Modelling what users see when they look at images : a cognitive viewpoint (2002) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 4471) [ClassicSimilarity], result of:
              0.048260607 = score(doc=4471,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 4471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4471)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Analysis of user viewing and query-matching behavior furnishes additional evidence that the relevance of retrieved images for system users may arise from descriptions of objects and content-based elements that are not evident or not even present in the image. This investigation looks at how users assign pre-determined query terms to retrieved images, as well as at a post-retrieval process of image engagement that leads to users' cognitive assessments of meaningful terms. Additionally, affective/emotion-based query terms appear to be an important descriptive category for image retrieval. A system for capturing (eliciting) human interpretations derived from cognitive engagements with viewed images could further enhance the efficiency of image retrieval systems stemming from traditional indexing methods and technology-based content extraction algorithms. An approach to such a system is posited.
  15. Garcia Jiménez, A.; Valle Gastaminza, F. del: From thesauri to ontologies: a case study in a digital visual context (2004) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 2657) [ClassicSimilarity], result of:
              0.04021717 = score(doc=2657,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 2657, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2657)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper a framework for the construction and organization of knowledge organization and representation languages in the context of digital photograph collections is presented. It analyses the exigencies of photographs as documentary objects, as well as several models of indexing, different proposals for languages, and a theoretical review of ontologies in this research field in relation to visual documents. In considering the photograph as an object of analysis, it is appropriate to study all its attributes: features, components or properties of an object that can be represented in an information processing system. The attributes related to visual features include cognitive and affective responses and elements that describe spatial, semantic, symbolic or emotional features of a photograph. In any case, it is necessary to treat: a) morphological and material attributes (emulsion, state of preservation); b) biographical attributes (school or trend, publication or exhibition); c) attributes of content: what a photograph says and how it says it; d) relational attributes: visual documents establish relationships with other documents that can be analysed in order to understand them.
  16. Enser, P.G.B.; Sandom, C.J.; Hare, J.S.; Lewis, P.H.: Facing the reality of semantic image retrieval (2007) 0.01
    0.0067028617 = product of:
      0.020108584 = sum of:
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 837) [ClassicSimilarity], result of:
              0.04021717 = score(doc=837,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=837)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - To provide a better-informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval techniques. Design/methodology/approach - Within an ongoing project, a broad spectrum of operational image retrieval activity has been surveyed, and, from a number of collaborating institutions, a test collection assembled which comprises user requests, the images selected in response to those requests, and their associated metadata. This has provided the evidence base upon which to make informed observations on the efficacy of cutting-edge automatic annotation techniques which seek to integrate the text-based and content-based image retrieval paradigms. Findings - Evidence from the real-world practice of image retrieval highlights the existence of a generic-specific continuum of object identification, and the incidence of temporal, spatial, significance and abstract concept facets, manifest in textual indexing and real-query scenarios but often having no directly visible presence in an image. These factors combine to limit the functionality of current semantic image retrieval techniques, which interpret only visible features at the generic extremity of the generic-specific continuum. Research limitations/implications - The project is concerned with the traditional image retrieval environment in which retrieval transactions are conducted on still images which form part of managed collections. The possibilities offered by ontological support for adding functionality to automatic annotation techniques are considered. Originality/value - The paper offers fresh insights into the challenge of migrating content-based image retrieval from the laboratory to the operational environment, informed by newly-assembled, comprehensive, live data.
  17. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.00
    0.0033657951 = product of:
      0.010097385 = sum of:
        0.010097385 = product of:
          0.02019477 = sum of:
            0.02019477 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.02019477 = score(doc=2293,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 9.2005 14:22:19
  18. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.0022438637 = product of:
      0.0067315907 = sum of:
        0.0067315907 = product of:
          0.013463181 = sum of:
            0.013463181 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.013463181 = score(doc=1858,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.1997 19:16:05