Search (11 results, page 1 of 1)

  • × year_i:[2000 TO 2010}
  • × theme_ss:"Inhaltsanalyse"
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.03
    0.025098871 = product of:
      0.100395486 = sum of:
        0.100395486 = sum of:
          0.06273703 = weight(_text_:aspects in 5589) [ClassicSimilarity], result of:
            0.06273703 = score(doc=5589,freq=2.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.29962775 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
          0.03765845 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
            0.03765845 = score(doc=5589,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.23214069 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
      0.25 = coord(1/4)
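The explain tree above can be reproduced by hand: each leaf score is the query weight (idf × queryNorm) multiplied by the field weight (tf × idf × fieldNorm), where tf is the square root of the term frequency and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch, assuming the bracketed [ClassicSimilarity] labels mean Lucene's classic TF-IDF similarity:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 4.5198684 for 'aspects'
    query_weight = idf * query_norm                   # the queryWeight line
    field_weight = tf * idf * field_norm              # the fieldWeight line
    return query_weight * field_weight                # leaf score

# Leaf scores for result 1 (doc 5589), parameters copied from the tree above
aspects = classic_similarity(2.0, 1308, 44218, 0.046325076, 0.046875)
term22  = classic_similarity(2.0, 3622, 44218, 0.046325076, 0.046875)
total   = (aspects + term22) * 0.25                   # coord(1/4)
```

Summing the two leaves (0.06273703 + 0.03765845 = 0.100395486) and applying coord(1/4) reproduces the displayed document score of 0.025098871.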
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  2. Naun, C.C.: Objectivity and subject access in the print library (2006) 0.01
    0.0142422225 = product of:
      0.05696889 = sum of:
        0.05696889 = weight(_text_:social in 236) [ClassicSimilarity], result of:
          0.05696889 = score(doc=236,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.30839854 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=236)
      0.25 = coord(1/4)
    
    Abstract
    Librarians have inherited from the print environment a particular way of thinking about subject representation, one based on the conscious identification by librarians of appropriate subject classes and terminology. This conception has played a central role in shaping the profession's characteristic approach to upholding one of its core values: objectivity. It is argued that the social and technological roots of traditional indexing practice are closely intertwined. It is further argued that in traditional library practice objectivity is to be understood as impartiality, and reflects the mediating role that librarians have played in society. The case presented here is not a historical one based on empirical research, but rather a conceptual examination of practices that are already familiar to most librarians.
  3. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.01
    0.012334129 = product of:
      0.049336515 = sum of:
        0.049336515 = weight(_text_:social in 133) [ClassicSimilarity], result of:
          0.049336515 = score(doc=133,freq=6.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.26708102 = fieldWeight in 133, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.02734375 = fieldNorm(doc=133)
      0.25 = coord(1/4)
    
    Abstract
It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory, and we therefore have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different conceptualizations which result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I also present arguments against the employment of an indexing language, it remains indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations.
There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify the structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models, if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential": it is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with its viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. Much work remains to be done, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results were really "better".
  4. Campbell, G.: Queer theory and the creation of contextual subject access tools for gay and lesbian communities (2000) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 6054) [ClassicSimilarity], result of:
          0.040692065 = score(doc=6054,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 6054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6054)
      0.25 = coord(1/4)
    
    Abstract
Knowledge organization research has come to question the theoretical distinction between "aboutness" (a document's innate content) and "meaning" (the use to which a document is put). This distinction has relevance beyond Information Studies, particularly in relation to homosexual concerns. Literary criticism, in particular, frequently addresses the question: when is a work "about" homosexuality? This paper explores this literary debate and its implications for the design of subject access systems for gay and lesbian communities. By examining the literary criticism of Herman Melville's Billy Budd, particularly in relation to the theories of Eve Kosofsky Sedgwick in The Epistemology of the Closet (1990), this paper exposes three tensions that designers of gay and lesbian classifications and vocabularies can expect to face. First is a tension between essentialist and constructivist views of homosexuality, which will affect the choice of terms, categories, and references. Second is a tension between minoritizing and universalizing perspectives on homosexuality. Third is a redefined distinction between aboutness and meaning, in which aboutness refers not to stable document content, but to the system designer's inescapable social and ideological perspectives. Designers of subject access systems can therefore expect to work in a context of intense scrutiny and persistent controversy.
  5. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 1590) [ClassicSimilarity], result of:
          0.040692065 = score(doc=1590,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 1590, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1590)
      0.25 = coord(1/4)
    
    Abstract
The paper considers indexing an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: it begins with some basic considerations on the nature of indexing and the requirements for a theory of it; this is followed by a short review of the use of Wittgenstein's philosophy in the LIS literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in the linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory thereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
  6. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 2069) [ClassicSimilarity], result of:
          0.040692065 = score(doc=2069,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 2069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2069)
      0.25 = coord(1/4)
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings and findings of other researchers in the area of information science, social psychology, and psycholinguistics indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study with catalogers revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
  7. Mai, J.-E.: Semiotics and indexing : an analysis of the subject indexing process (2001) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 4480) [ClassicSimilarity], result of:
          0.040692065 = score(doc=4480,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 4480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4480)
      0.25 = coord(1/4)
    
    Abstract
This paper explains at least some of the major problems related to the subject indexing process and proposes a new approach to understanding it. The process is ordinarily described as taking a number of steps: the subject is first determined, then it is described in a few sentences and, lastly, the description of the subject is converted into the indexing language. It is argued that this typical approach characteristically lacks an understanding of what the central nature of the process is. Indexing is not a neutral and objective representation of a document's subject matter but the representation of an interpretation of a document for future use. Semiotics is offered here as a framework for understanding the "interpretative" nature of the subject indexing process. By placing this process within Peirce's semiotic framework of ideas and terminology, a more detailed description of the process is offered which shows that the uncertainty generally associated with it arises because the indexer goes through a number of steps and creates the subject matter of the document during this process. The creation of the subject matter is based on the indexer's social and cultural context. The paper offers an explanation of what occurs in the indexing process and suggests that there is only little certainty in its result.
  8. Hoover, L.: ¬A beginners' guide for subject analysis of theses and dissertations in the hard sciences (2005) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 5740) [ClassicSimilarity], result of:
          0.040692065 = score(doc=5740,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 5740, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5740)
      0.25 = coord(1/4)
    
    Abstract
    This guide, for beginning catalogers with humanities or social sciences backgrounds, provides assistance in subject analysis (based on Library of Congress Subject Headings) of theses and dissertations (T/Ds) that are produced by graduate students in university departments in the hard sciences (physical sciences and engineering). It is aimed at those who have had little or no experience in cataloging, especially of this type of material, and for those who desire to supplement local mentoring resources for subject analysis in the hard sciences. Theses and dissertations from these departments present a special challenge because they are the results of current research representing specific new concepts with which the cataloger may not be familiar. In fact, subject headings often have not yet been created for the specific concept(s) being researched. Additionally, T/D authors often use jargon/terminology specific to their department. Catalogers often have many other duties in addition to subject analysis of T/Ds in the hard sciences, yet they desire to provide optimal access through accurate, thorough subject analysis. Tips are provided for determining the content of the T/D, strategic searches on WorldCat for possible subject headings, evaluating the relevancy of these subject headings for final selection, and selecting appropriate subdivisions where needed. Lists of basic reference resources are also provided.
  9. Winget, M.: Describing art : an alternative approach to subject access and interpretation (2009) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 3618) [ClassicSimilarity], result of:
          0.040692065 = score(doc=3618,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 3618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3618)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine the art historical antecedents of providing subject access to images. After reviewing the assumptions and limitations inherent in the most prevalent descriptive method, the paper seeks to introduce a new model that allows for more comprehensive representation of visually-based cultural materials. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Panofsky's theory of iconography and iconology as the starting-point. Panofsky's conceptual model, while appropriate for art created in the Western academic tradition, ignores or misrepresents work from other eras or cultures. Continued dependence on Panofskian descriptive methods limits the functionality and usefulness of image representation systems. Findings - The paper recommends the development of a more precise and inclusive descriptive model for art objects, which is based on the premise that art is not another sort of text, and should not be interpreted as such. Practical implications - The paper provides suggestions for the development of representation models that will enhance the description of non-textual artifacts. Originality/value - The paper addresses issues in information science, the history of art, and computer science, and suggests that a new descriptive model would be of great value to both humanist and social science scholars.
  10. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.00
    0.002353653 = product of:
      0.009414612 = sum of:
        0.009414612 = product of:
          0.018829225 = sum of:
            0.018829225 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.018829225 = score(doc=2293,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 9.2005 14:22:19
  11. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.00
    0.001569102 = product of:
      0.006276408 = sum of:
        0.006276408 = product of:
          0.012552816 = sum of:
            0.012552816 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.012552816 = score(doc=1858,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.1997 19:16:05