Search (33 results, page 2 of 2)

  • theme_ss:"Inhaltsanalyse"
  • year_i:[2000 TO 2010}
  1. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 133) [ClassicSimilarity], result of:
              0.008202582 = score(doc=133,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 133, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
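For readers unfamiliar with Lucene's explain output, the score tree above can be reproduced by hand: ClassicSimilarity scores a term occurrence as tf × idf × fieldNorm, scaled by the query weight (idf × queryNorm), with the two coord(1/2) factors applied on top. A minimal sketch using only the values shown above; the idf formula 1 + ln(maxDocs/(docFreq+1)) is ClassicSimilarity's definition, and small last-digit differences against the printed tree come from Lucene's single-precision arithmetic:

```python
import math

# Inputs copied from the explain tree for doc 133 above.
freq = 24.0              # termFreq of "_text_:a" in doc 133
doc_freq = 37942         # docFreq: documents containing the term
max_docs = 44218         # maxDocs: total documents in the index
query_norm = 0.046056706
field_norm = 0.02734375  # length normalization for this field

tf = math.sqrt(freq)                               # ~4.8989797
idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # ~1.153047
query_weight = idf * query_norm                    # ~0.053105544
field_weight = tf * idf * field_norm               # ~0.1544581
weight = query_weight * field_weight               # ~0.008202582
score = weight * 0.5 * 0.5                         # two coord(1/2) factors

print(f"{score:.10f}")
```

The final value matches the 0.0020506454 at the root of the tree; every intermediate variable corresponds to one labelled node in the explain output.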
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. Because this is an exceptionally hard problem, we still lack a sound indexing theory, and we therefore have difficulties teaching indexing and explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different conceptualizations which result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I also present arguments against the employment of an indexing language, it remains indispensable in situations that demand easier access and control by devices. Despite the enormous difficulties that user-oriented, selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. 
There is some hope, because the number of useful interpretations is limited: every summary is tailored to a purpose, audience, and situation, and domain, discourse, and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently needs to be made more practical and applicable; only then will we be able to investigate domains empirically in order to identify the structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes, we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential": it is sufficient to identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with its viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it applies in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. Much work remains to be done, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
    Type
    a
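The frame-based, request-oriented representation that Sigel's abstract suggests can be illustrated with a small sketch: slots group index terms so that relational knowledge structures, and typical user questions, can be expressed. The slot names (topic, domain, method, audience, purpose) are assumptions chosen for illustration, not a schema from the paper:

```python
# Hypothetical frame for user-oriented, request-oriented indexing of one
# document. Slot names are illustrative assumptions, not a published schema.
indexing_frame = {
    "topic": ["subject indexing", "depth analysis"],
    "domain": ["social sciences"],
    "method": ["qualitative methods", "domain analysis"],
    "audience": ["indexing researchers", "catalogers"],
    "purpose": ["user-oriented retrieval"],
}

def answers(frame, slot, term):
    """Request-oriented check: does the frame answer a user's question
    about a given slot with a given term?"""
    return term in frame.get(slot, [])

print(answers(indexing_frame, "domain", "social sciences"))  # True
```

Checklist/request-oriented indexing then amounts to filling such slots against the anticipated questions of a discourse community, rather than describing the document in isolation.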
  2. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.00
    
    Abstract
    We describe NKRL (Narrative Knowledge Representation Language), a semantic formalism for capturing the characteristics of narrative multimedia documents. In these documents, the information content consists of descriptions of 'events' that relate the real or intended behaviour of 'actors' (characters, personages, etc.). Narrative documents of economic interest include news stories, corporate documents, normative and legal texts, intelligence messages, representations of patients' medical records, etc. NKRL is characterised by the use of several knowledge representation principles and several high-level inference tools.
    Type
    a
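The event-centred model described in Zarri's abstract can be sketched as a data structure: each record captures one 'event' relating the behaviour of some 'actors'. This is illustrative only; the class and attribute names are assumptions for this sketch and do not reproduce actual NKRL syntax:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of an event-centred record in the spirit of NKRL's
# narrative model. Names here are assumptions, not actual NKRL constructs.
@dataclass
class NarrativeEvent:
    predicate: str      # conceptual predicate, e.g. MOVE, PRODUCE, RECEIVE
    actors: List[str]   # the characters/personages whose behaviour is related
    obj: str = ""       # what the behaviour bears on
    date: str = ""      # temporal anchoring of the event
    source: str = ""    # originating document (news story, corporate text, ...)

# One 'event' extracted from a hypothetical news story:
event = NarrativeEvent(
    predicate="RECEIVE",
    actors=["Company_X", "Supplier_Y"],
    obj="shipment_of_parts",
    date="2003-06-01",
    source="news_story_0042",
)
print(event.predicate)
```

Querying such records then amounts to matching on predicates, actors, and temporal constraints, which is the kind of high-level inference the abstract alludes to.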
  3. Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003) 0.00
    
    Abstract
    The principle of specificity for subject headings provides a clear advantage to many researchers through the precision it brings to subject searching. However, for some researchers very specific subject headings hinder an efficient and comprehensive search. An appropriate broader heading, especially when made narrower in scope by the addition of subheadings, can benefit researchers by providing generic access to their topic. Assigning both specific and generic subject headings to a work would enhance its subject accessibility for the diverse approaches and research needs of different catalog users. However, it can be difficult for catalogers to assign broader terms consistently to different works, and without consistency the gathering function of those terms may not be realized.
    Type
    a
  4. Marsh, E.E.; White, M.D.: ¬A taxonomy of relationships between images and text (2003) 0.00
    
    Abstract
    The paper establishes a taxonomy of image-text relationships that reflects the ways images and text interact; it is applicable to all subject areas and document types. The taxonomy was developed to answer the research question: how does an illustration relate to the text with which it is associated, or, put differently, what are the functions of illustration? It was developed in a two-stage process: first, analysis of relevant research in children's literature, dictionary development, education, journalism, and library and information design; second, application of the first version of the taxonomy to 954 image-text pairs in 45 Web pages (pages with educational content for children, online newspapers, and retail business pages). The resulting taxonomy identifies 49 relationships and groups them in three categories according to the closeness of the conceptual relationship between image and text. The paper uses qualitative content analysis to illustrate use of the taxonomy to analyze four image-text pairs in government publications, and discusses the implications of the research for information retrieval and document design.
    Type
    a
  5. Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.00
    
    Abstract
    The paper demonstrates that indexing is a complex phenomenon and presents a domain-centered approach to it. The indexing process is analysed using Means-Ends Analysis, a tool developed for the Cognitive Work Analysis framework. A Means-Ends Analysis of indexing provides a holistic understanding of indexing and shows the importance of understanding the users' activities when indexing. The resulting domain-centered approach accordingly includes an analysis of those activities, and the paper outlines the approach in detail.
    Content
    1. Introduction The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document, with the goal of representing the document as truthfully as possible and of ensuring the subject representation's validity by remaining neutral and objective. To help indexers with their task, they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach, in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents, as well as when selecting index terms.
    Type
    a
  6. Sauperl, A.: Subject cataloging process of Slovenian and American catalogers (2005) 0.00
    
    Abstract
    Purpose - An empirical study has shown that the real process of subject cataloging does not correspond entirely to theoretical descriptions in textbooks and international standards. The purpose of this paper is to address whether it is possible for catalogers who have not received formal training to perform subject cataloging in a different way from their trained colleagues. Design/methodology/approach - A qualitative study was conducted in 2001 among five Slovenian public library catalogers. The resulting model is compared to previous findings. Findings - First, all catalogers attempted to determine what the book was about. While the American catalogers tried to understand the topic and the author's intent, the Slovenian catalogers appeared to focus on the topic only. Slovenian and American academic library catalogers did not demonstrate any anticipation of possible uses that users might have of the book, while this was important for American public library catalogers. All catalogers used existing records to build new ones and/or to search for subject headings. The verification of subject representation with the indexing language was the last step in the subject cataloging process of American catalogers, and was often skipped by Slovenian catalogers. Research limitations/implications - The small convenience sample limits the findings. Practical implications - Comparison of the subject cataloging processes of Slovenian and American catalogers, two different groups, is important because both contribute to OCLC's WorldCat database. If the cataloging community is building a universal catalog and approaches to subject description differ, then the resulting subject representations might also differ. Originality/value - This is one of the very few empirical studies of subject cataloging and indexing.
    Type
    a
  7. Winget, M.: Describing art : an alternative approach to subject access and interpretation (2009) 0.00
    
    Abstract
    Purpose - The purpose of this paper is to examine the art historical antecedents of providing subject access to images. After reviewing the assumptions and limitations inherent in the most prevalent descriptive method, the paper seeks to introduce a new model that allows for more comprehensive representation of visually-based cultural materials. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Panofsky's theory of iconography and iconology as the starting-point. Panofsky's conceptual model, while appropriate for art created in the Western academic tradition, ignores or misrepresents work from other eras or cultures. Continued dependence on Panofskian descriptive methods limits the functionality and usefulness of image representation systems. Findings - The paper recommends the development of a more precise and inclusive descriptive model for art objects, which is based on the premise that art is not another sort of text, and should not be interpreted as such. Practical implications - The paper provides suggestions for the development of representation models that will enhance the description of non-textual artifacts. Originality/value - The paper addresses issues in information science, the history of art, and computer science, and suggests that a new descriptive model would be of great value to both humanist and social science scholars.
    Type
    a
  8. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.00
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has mostly been avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings, and those of other researchers in information science, social psychology, and psycholinguistics, indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in that document. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
    Type
    a
  9. Hoover, L.: ¬A beginners' guide for subject analysis of theses and dissertations in the hard sciences (2005) 0.00
    
    Abstract
    This guide, for beginning catalogers with humanities or social sciences backgrounds, provides assistance in the subject analysis (based on Library of Congress Subject Headings) of theses and dissertations (T/Ds) produced by graduate students in university departments in the hard sciences (physical sciences and engineering). It is aimed at those who have had little or no experience in cataloging, especially of this type of material, and at those who want to supplement local mentoring resources for subject analysis in the hard sciences. Theses and dissertations from these departments present a special challenge because they are the results of current research representing specific new concepts with which the cataloger may not be familiar; in fact, subject headings often have not yet been created for the specific concept(s) being researched. Additionally, T/D authors often use jargon/terminology specific to their department. Catalogers often have many other duties in addition to the subject analysis of T/Ds in the hard sciences, yet they want to provide optimal access through accurate, thorough subject analysis. Tips are provided for determining the content of the T/D, conducting strategic searches on WorldCat for possible subject headings, evaluating the relevance of these subject headings for final selection, and selecting appropriate subdivisions where needed. Lists of basic reference resources are also provided.
    Type
    a
  10. Miene, A.; Hermes, T.; Ioannidis, G.: Wie kommt das Bild in die Datenbank? : Inhaltsbasierte Analyse von Bildern und Videos (2002) 0.00
    
    Type
    a
  11. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.00
    
    Type
    a
  12. Andersen, J.: ¬The concept of genre : when, how, and why? (2001) 0.00
    
    Type
    a
  13. Ackermann, A.: Zur Rolle der Inhaltsanalyse bei der Sacherschließung : theoretischer Anspruch und praktische Wirklichkeit in der RSWK (2001) 0.00