Search (56 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.09
    0.08519173 = product of:
      0.12778759 = sum of:
        0.10759281 = weight(_text_:systematic in 5589) [ClassicSimilarity], result of:
          0.10759281 = score(doc=5589,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.3788859 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.04038954 = score(doc=5589,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
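    Note: the nested figures above are Lucene's ClassicSimilarity (TF-IDF) explain output for this hit. As a rough sanity check, the sketch below (not part of the search output) re-derives the displayed score from the values in the tree, assuming the standard ClassicSimilarity formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, with each sum scaled by its coord(matching clauses / total clauses) factor.
    import math

    QUERY_NORM = 0.049684696   # queryNorm taken from the explain tree above
    MAX_DOCS = 44218           # maxDocs taken from the explain tree above

    def idf(doc_freq, max_docs=MAX_DOCS):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause(freq, doc_freq, field_norm):
        # contribution of one term clause: queryWeight * fieldWeight
        tf = math.sqrt(freq)
        query_weight = idf(doc_freq) * QUERY_NORM
        field_weight = tf * idf(doc_freq) * field_norm
        return query_weight * field_weight

    # "systematic": freq=2, docFreq=395; "22": freq=2, docFreq=3622 with inner coord(1/2)
    systematic = clause(2.0, 395, 0.046875)         # ~0.10759281
    twenty_two = clause(2.0, 3622, 0.046875) * 0.5  # ~0.02019477

    score = (systematic + twenty_two) * (2.0 / 3.0)  # outer coord(2/3)
    print(f"{score:.8f}")  # ~0.08519173, displayed rounded as 0.09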
  2. Huang, X.; Soergel, D.; Klavans, J.L.: Modeling and analyzing the topicality of art images (2015) 0.08
    0.07873234 = product of:
      0.11809851 = sum of:
        0.08966068 = weight(_text_:systematic in 2127) [ClassicSimilarity], result of:
          0.08966068 = score(doc=2127,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 2127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2127)
        0.028437834 = product of:
          0.05687567 = sum of:
            0.05687567 = weight(_text_:indexing in 2127) [ClassicSimilarity], result of:
              0.05687567 = score(doc=2127,freq=4.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29905218 = fieldWeight in 2127, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2127)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study demonstrates an improved conceptual foundation to support well-structured analysis of image topicality. First we present a conceptual framework for analyzing image topicality, explicating the layers, the perspectives, and the topical relevance relationships involved in modeling the topicality of art images. We adapt a generic relevance typology to image analysis by extending it with definitions and relationships specific to the visual art domain and integrating it with schemes of image-text relationships that are important for image subject indexing. We then apply the adapted typology to analyze the topical relevance relationships between 11 art images and 768 image tags assigned by art historians and librarians. The original contribution of our work is the topical structure analysis of image tags that allows the viewer to more easily grasp the content, context, and meaning of an image and quickly tune into aspects of interest; it could also guide both the indexer and the searcher to specify image tags/descriptors in a more systematic and precise manner and thus improve the match between the two parties. An additional contribution is systematically examining and integrating the variety of image-text relationships from a relevance perspective. The paper concludes with implications for relational indexing and social tagging.
  3. Bland, R.N.: The concept of intellectual level in cataloging and classification (1983) 0.05
    0.04781903 = product of:
      0.14345708 = sum of:
        0.14345708 = weight(_text_:systematic in 321) [ClassicSimilarity], result of:
          0.14345708 = score(doc=321,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.5051812 = fieldWeight in 321, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper traces the history of the concept of intellectual level in cataloging and classification in the United States. Past cataloging codes, subject-heading practice, and classification systems have provided library users with little systematic information concerning the intellectual level or intended audience of works. Reasons for this omission are discussed, and arguments are developed to show that this kind of information would be a useful addition to the catalog record of the present and the future.
  4. Mai, J.-E.: Analysis in indexing : document and domain centered approaches (2005) 0.03
    0.026541978 = product of:
      0.079625934 = sum of:
        0.079625934 = product of:
          0.15925187 = sum of:
            0.15925187 = weight(_text_:indexing in 1024) [ClassicSimilarity], result of:
              0.15925187 = score(doc=1024,freq=16.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.8373461 = fieldWeight in 1024, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1024)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper discusses the notion of steps in indexing, reveals that the document-centered approach to indexing is prevalent, and argues that this approach is problematic because it blocks out context-dependent factors in the indexing process. A domain-centered approach to indexing is presented as an alternative, and the paper discusses how this approach includes a broader range of analyses and how it requires a new set of actions: analysis of the domain, the users and the indexers. The paper concludes that the two-step procedure to indexing is insufficient to explain the indexing process and suggests that the domain-centered approach offers a guide for indexers that can help them manage the complexity of indexing.
  5. Green, R.: The role of relational structures in indexing for the humanities (1997) 0.02
    0.024130303 = product of:
      0.07239091 = sum of:
        0.07239091 = product of:
          0.14478181 = sum of:
            0.14478181 = weight(_text_:indexing in 474) [ClassicSimilarity], result of:
              0.14478181 = score(doc=474,freq=18.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.76126254 = fieldWeight in 474, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=474)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper is divided into 3 parts. The 1st develops a framework for evaluating the indexing needs of the humanities with reference to 4 sets of contrasts: user (need)-oriented vs. document-oriented indexing; subject indexing vs. attribute indexing; scientific writing vs. humanistic writing; and topical relevance vs. logical relevance vs. evidential relevance vs. aesthetic relevance. The indexing needs for the humanities range broadly across these contrasts. The 2nd part establishes the centrality of relationships to the communication of indexable matter and examines the advantages and disadvantages of means used for their expression in both natural languages and indexing languages. The use of relational structure, such as a frame, is shown to represent perhaps the best available option. The 3rd part illustrates where the use of relational structures in humanities indexing would help meet some of the needs previously identified. Although not a panacea, the adoption of frame-based indexing in the humanities might substantially improve the retrieval of its literature
  6. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.02
    0.02398089 = product of:
      0.071942665 = sum of:
        0.071942665 = product of:
          0.14388533 = sum of:
            0.14388533 = weight(_text_:indexing in 2926) [ClassicSimilarity], result of:
              0.14388533 = score(doc=2926,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.7565488 = fieldWeight in 2926, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2926)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested
  7. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.02
    0.023219395 = product of:
      0.06965818 = sum of:
        0.06965818 = product of:
          0.13931637 = sum of:
            0.13931637 = weight(_text_:indexing in 1590) [ClassicSimilarity], result of:
              0.13931637 = score(doc=1590,freq=24.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.7325252 = fieldWeight in 1590, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1590)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper considers indexing an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: It begins with some basic considerations on the nature of indexing and the requirements for a theory of it; it is followed by a short review of the use of Wittgenstein's philosophy in LIS-literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in the linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory hereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
  8. Todd, R.J.: Academic indexing : what's it all about? (1992) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 3011) [ClassicSimilarity], result of:
              0.12869495 = score(doc=3011,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 3011, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3011)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    While the literature identifies some broad approaches to subject analysis, there is little supporting empirical evidence and few attempts to explicate any specifiable procedures. A productive step forward for indexing research would be to begin by examining how indexers actually undertake the process of subject analysis and to explore systematically the factors that guide and influence this process. This would shed some light on a theory of subject analysis, clarify some of the central concepts of indexing, and provide an intelligent knowledge base for effective academic indexing practice
  9. ISO 5963: Methods for examining documents, determining their subjects and selecting indexing terms (1983) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 3991) [ClassicSimilarity], result of:
              0.12869495 = score(doc=3991,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 3991, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.125 = fieldNorm(doc=3991)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Chubin, D.E.; Moitra, S.D.: Content analysis of references : adjunct or alternative to citation counting? (1975) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 5647) [ClassicSimilarity], result of:
              0.12869495 = score(doc=5647,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 5647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.125 = fieldNorm(doc=5647)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  11. Farrow, J.: Indexing as a cognitive process (1994) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 1257) [ClassicSimilarity], result of:
              0.12869495 = score(doc=1257,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 1257, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.125 = fieldNorm(doc=1257)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  12. Mai, J.-E.: Deconstructing the indexing process (2000) 0.02
    0.02144916 = product of:
      0.064347476 = sum of:
        0.064347476 = product of:
          0.12869495 = sum of:
            0.12869495 = weight(_text_:indexing in 4696) [ClassicSimilarity], result of:
              0.12869495 = score(doc=4696,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6766778 = fieldWeight in 4696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.125 = fieldNorm(doc=4696)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  13. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.02
    0.020983277 = product of:
      0.06294983 = sum of:
        0.06294983 = product of:
          0.12589966 = sum of:
            0.12589966 = weight(_text_:indexing in 2021) [ClassicSimilarity], result of:
              0.12589966 = score(doc=2021,freq=10.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.6619802 = fieldWeight in 2021, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
    LCSH
    Indexing
    Subject Indexing
  14. Riesthuis, G.J.A.; Stuurman, P.: Tendenzen in de onderwerpsontsluiting : T.1: Inhoudsanalyse (1989) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 1841) [ClassicSimilarity], result of:
              0.11260808 = score(doc=1841,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 1841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1841)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Translation of the title: Trends in subject indexing: content analysis
  15. Ahmad, N.: Newspaper indexing : an international overview (1991) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 3633) [ClassicSimilarity], result of:
              0.11260808 = score(doc=3633,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 3633, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3633)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Comprehensiveness and consistency in newspaper indexing depend on the effectiveness of subject analysis of the news items. Discusses indexing skills required in order to identify indexable concepts. Describes practical aspects of conceptual analysis, crystallises criteria and methods for the indexing of news stories, and elucidates reasons for providing multiple subject entries for certain news items. Suggests rules for news analysis and for speedy and accurate allocation of subject headings, and illustrates the technique of dealing with complex and diversified news headings reported at intervals. As the headlines do not always indicate the real subject of a news story, the identification of indexable concepts can become arduous and cumbersome. Discusses the methods, skills and capability needed to tackle such problems
  16. BS 6529: Recommendations for examining documents, determining their subjects and selecting indexing terms (1984) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 1155) [ClassicSimilarity], result of:
              0.11260808 = score(doc=1155,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 1155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1155)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  17. Hutchins, W.J.: The concept of 'aboutness' in subject indexing (1978) 0.02
    0.018768014 = product of:
      0.05630404 = sum of:
        0.05630404 = product of:
          0.11260808 = sum of:
            0.11260808 = weight(_text_:indexing in 1961) [ClassicSimilarity], result of:
              0.11260808 = score(doc=1961,freq=8.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5920931 = fieldWeight in 1961, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1961)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The common view of the 'aboutness' of documents is that the index entries (or classifications) assigned to documents represent or indicate in some way the total contents of documents; indexing and classifying are seen as processes involving the 'summarization' of the texts of documents. In this paper an alternative concept of 'aboutness' is proposed, based on an analysis of the linguistic organization of texts, which is felt to be more appropriate in many indexing environments (particularly in non-specialized libraries and information services) and which has implications for the evaluation of the effectiveness of indexing systems
  18. Svenonius, E.: Access to nonbook materials : the limits of subject indexing for visual and aural languages (1994) 0.02
    0.018575516 = product of:
      0.055726547 = sum of:
        0.055726547 = product of:
          0.11145309 = sum of:
            0.11145309 = weight(_text_:indexing in 8263) [ClassicSimilarity], result of:
              0.11145309 = score(doc=8263,freq=6.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.5860202 = fieldWeight in 8263, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8263)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    An examination of some nonbook materials with respect to an aboutness model of indexing leads to the conclusion that there are instances that defy subject indexing. These occur not so much because of the nature of the medium per se but because it is being used for nondocumentary purposes, or, when being used for such purposes, the subject referenced is nonlexical
  19. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.02
    0.018172052 = product of:
      0.05451615 = sum of:
        0.05451615 = product of:
          0.1090323 = sum of:
            0.1090323 = weight(_text_:indexing in 133) [ClassicSimilarity], result of:
              0.1090323 = score(doc=133,freq=30.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.57329166 = fieldWeight in 133, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable, yet different, conceptualizations that result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focusses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to dynamically tailor summaries for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we iteratively identify and represent the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work or whether its indexing results would really be "better".
  20. Mai, J.-E.: Semiotics and indexing : an analysis of the subject indexing process (2001) 0.02
    0.017734105 = product of:
      0.053202312 = sum of:
        0.053202312 = product of:
          0.106404625 = sum of:
            0.106404625 = weight(_text_:indexing in 4480) [ClassicSimilarity], result of:
              0.106404625 = score(doc=4480,freq=14.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.55947536 = fieldWeight in 4480, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4480)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper explains at least some of the major problems related to the subject indexing process and proposes a new approach to understanding it. The process is ordinarily described as one that takes a number of steps: the subject is first determined, then it is described in a few sentences and, lastly, the description of the subject is converted into the indexing language. It is argued that this typical approach characteristically lacks an understanding of the central nature of the process. Indexing is not a neutral and objective representation of a document's subject matter but the representation of an interpretation of a document for future use. Semiotics is offered here as a framework for understanding the "interpretative" nature of the subject indexing process. By placing this process within Peirce's semiotic framework of ideas and terminology, a more detailed description of the process is offered which shows that the uncertainty generally associated with this process arises because the indexer goes through a number of steps and creates the subject matter of the document during this process. The creation of the subject matter is based on the indexer's social and cultural context. The paper offers an explanation of what occurs in the indexing process and suggests that there is little certainty about its result.

Languages

  • e 53
  • d 2
  • nl 1

Types

  • a 51
  • m 3
  • n 2
  • el 1

Classifications