Search (158 results, page 1 of 8)

  • theme_ss:"Inhaltsanalyse"
  1. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.07
    0.06556197 = sum of:
      0.018274104 = product of:
        0.07309642 = sum of:
          0.07309642 = weight(_text_:authors in 5589) [ClassicSimilarity], result of:
            0.07309642 = score(doc=5589,freq=2.0), product of:
              0.2418733 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053056188 = queryNorm
              0.30220953 = fieldWeight in 5589, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5589)
        0.25 = coord(1/4)
      0.047287866 = sum of:
        0.0041575856 = weight(_text_:s in 5589) [ClassicSimilarity], result of:
          0.0041575856 = score(doc=5589,freq=2.0), product of:
            0.057684682 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.053056188 = queryNorm
            0.072074346 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.043130282 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
          0.043130282 = score(doc=5589,freq=2.0), product of:
            0.18579373 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.053056188 = queryNorm
            0.23214069 = fieldWeight in 5589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
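     The explain tree under result 1 can be reproduced step by step. Below is a minimal sketch (an editorial illustration, not part of the search output) that recomputes the weight of the `authors` term for document 5589 using Lucene's ClassicSimilarity formulas; the queryNorm value is copied from the tree rather than derived, because the full query is not shown here.

```python
import math

# Values reported in the explain tree for weight(_text_:authors in 5589)
freq = 2.0                  # termFreq of "authors" in the field
doc_freq = 1258             # docFreq from the idf line
max_docs = 44218            # maxDocs from the idf line
query_norm = 0.053056188    # taken from the tree; depends on the full query
field_norm = 0.046875       # stored length norm for this field/document

# ClassicSimilarity building blocks
tf = math.sqrt(freq)                              # ~1.4142135
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~4.558814
query_weight = idf * query_norm                   # ~0.2418733
field_weight = tf * idf * field_norm              # ~0.30220953
term_score = query_weight * field_weight          # ~0.07309642

# The tree then applies coord(1/4) to this branch and adds the other
# branch (0.047287866) to arrive at the document score of result 1.
doc_score = term_score * 0.25 + 0.047287866       # ~0.06556197
print(round(term_score, 8), round(doc_score, 8))
```

     Run as-is, this reproduces the 0.07309642 term weight and the 0.06556197 total shown above to within floating-point rounding.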
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.04
    0.03940656 = product of:
      0.07881312 = sum of:
        0.07881312 = sum of:
          0.00692931 = weight(_text_:s in 5835) [ClassicSimilarity], result of:
            0.00692931 = score(doc=5835,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.120123915 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.07188381 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.07188381 = score(doc=5835,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:44
    Pages
    S.146-159
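     The same idf values recur throughout this result list: 4.558814 for `authors`, 1.0872376 for `s`, and 3.5018296 for `22`. A small sketch, assuming ClassicSimilarity's idf(t) = 1 + ln(maxDocs / (docFreq + 1)), confirms all three from the docFreq figures shown in the trees:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: the rarer the term, the larger the weight."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# (term, docFreq) pairs as reported in the explain trees above
for term, df in [("authors", 1258), ("s", 40523), ("22", 3622)]:
    print(term, round(classic_idf(df, 44218), 6))
# Output matches the idf lines in the explain output (up to rounding):
# authors ~4.558814, s ~1.08724, 22 ~3.50183
```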
  3. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.03
    0.031525247 = product of:
      0.06305049 = sum of:
        0.06305049 = sum of:
          0.005543448 = weight(_text_:s in 5830) [ClassicSimilarity], result of:
            0.005543448 = score(doc=5830,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.09609913 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.057507046 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.057507046 = score(doc=5830,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:08
    Pages
    S.39-48
  4. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.03
    0.031525247 = product of:
      0.06305049 = sum of:
        0.06305049 = sum of:
          0.005543448 = weight(_text_:s in 251) [ClassicSimilarity], result of:
            0.005543448 = score(doc=251,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.09609913 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
          0.057507046 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
            0.057507046 = score(doc=251,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.30952093 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  5. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.03
    0.027864646 = product of:
      0.055729292 = sum of:
        0.055729292 = sum of:
          0.004899762 = weight(_text_:s in 4888) [ClassicSimilarity], result of:
            0.004899762 = score(doc=4888,freq=4.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.08494043 = fieldWeight in 4888, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
          0.05082953 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
            0.05082953 = score(doc=4888,freq=4.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.27358043 = fieldWeight in 4888, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:02:10
    Footnote
     Refers to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
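     Result 5 differs from results 2-4 mainly in its term frequencies (freq=4.0 instead of 2.0) and a smaller fieldNorm. A short sketch, assuming ClassicSimilarity's tf = sqrt(freq), reproduces the 0.27358043 fieldWeight reported for the `22` term in document 4888:

```python
import math

# Figures from the explain tree for weight(_text_:22 in 4888)
freq, idf, field_norm = 4.0, 3.5018296, 0.0390625

tf = math.sqrt(freq)                  # 2.0: doubling freq raises tf by sqrt(2), not by 2
field_weight = tf * idf * field_norm  # ~0.27358043
print(field_weight)
```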
  6. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.02
    0.024504999 = product of:
      0.049009997 = sum of:
        0.049009997 = sum of:
          0.0058797146 = weight(_text_:s in 1416) [ClassicSimilarity], result of:
            0.0058797146 = score(doc=1416,freq=4.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.101928525 = fieldWeight in 1416, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
          0.043130282 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
            0.043130282 = score(doc=1416,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.23214069 = fieldWeight in 1416, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1416)
      0.5 = coord(1/2)
    
    Pages
    S.144-151
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  7. Shaw, R.: Information organization and the philosophy of history (2013) 0.02
    0.023745045 = sum of:
      0.021319786 = product of:
        0.085279144 = sum of:
          0.085279144 = weight(_text_:authors in 946) [ClassicSimilarity], result of:
            0.085279144 = score(doc=946,freq=2.0), product of:
              0.2418733 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053056188 = queryNorm
              0.35257778 = fieldWeight in 946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=946)
        0.25 = coord(1/4)
      0.0024252585 = product of:
        0.004850517 = sum of:
          0.004850517 = weight(_text_:s in 946) [ClassicSimilarity], result of:
            0.004850517 = score(doc=946,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.08408674 = fieldWeight in 946, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0546875 = fieldNorm(doc=946)
        0.5 = coord(1/2)
    
    Abstract
    The philosophy of history can help articulate problems relevant to information organization. One such problem is "aboutness": How do texts relate to the world? In response to this problem, philosophers of history have developed theories of colligation describing how authors bind together phenomena under organizing concepts. Drawing on these ideas, I present a theory of subject analysis that avoids the problematic illusion of an independent "landscape" of subjects. This theory points to a broad vision of the future of information organization and some specific challenges to be met.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1092-1103
  8. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.02
    0.023643933 = product of:
      0.047287866 = sum of:
        0.047287866 = sum of:
          0.0041575856 = weight(_text_:s in 6525) [ClassicSimilarity], result of:
            0.0041575856 = score(doc=6525,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.072074346 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
          0.043130282 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
            0.043130282 = score(doc=6525,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.23214069 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  9. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.02
    0.020352896 = sum of:
      0.018274104 = product of:
        0.07309642 = sum of:
          0.07309642 = weight(_text_:authors in 1958) [ClassicSimilarity], result of:
            0.07309642 = score(doc=1958,freq=2.0), product of:
              0.2418733 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053056188 = queryNorm
              0.30220953 = fieldWeight in 1958, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=1958)
        0.25 = coord(1/4)
      0.0020787928 = product of:
        0.0041575856 = sum of:
          0.0041575856 = weight(_text_:s in 1958) [ClassicSimilarity], result of:
            0.0041575856 = score(doc=1958,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.072074346 = fieldWeight in 1958, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046875 = fieldNorm(doc=1958)
        0.5 = coord(1/2)
    
    Abstract
     Information search and retrieval interactions usually involve information content in the form of document collections, information retrieval systems and interfaces, and the user. To fully understand information search and retrieval interactions between users' cognitive space and the information space, researchers need to turn to cognitive models and theories. In this article, the authors use one of these theories, the basic level theory. Use of the basic level theory to understand human categorization is both appropriate and essential to user-centered design of taxonomies, ontologies, browsing interfaces, and other indexing tools and systems. Analyses of data from two studies involving free sorting by 105 participants of 100 images were conducted. The types of categories formed and category labels were examined. Results of the analyses indicate that image category labels generally belong to levels superordinate to the basic level, and are generic and interpretive. Implications for research on theories of cognition and categorization, and design of image indexing, retrieval and browsing systems are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1383-1392
  10. Hoover, L.: A beginners' guide for subject analysis of theses and dissertations in the hard sciences (2005) 0.02
    0.0176783 = sum of:
      0.01522842 = product of:
        0.06091368 = sum of:
          0.06091368 = weight(_text_:authors in 5740) [ClassicSimilarity], result of:
            0.06091368 = score(doc=5740,freq=2.0), product of:
              0.2418733 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053056188 = queryNorm
              0.25184128 = fieldWeight in 5740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5740)
        0.25 = coord(1/4)
      0.002449881 = product of:
        0.004899762 = sum of:
          0.004899762 = weight(_text_:s in 5740) [ClassicSimilarity], result of:
            0.004899762 = score(doc=5740,freq=4.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.08494043 = fieldWeight in 5740, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5740)
        0.5 = coord(1/2)
    
    Abstract
    This guide, for beginning catalogers with humanities or social sciences backgrounds, provides assistance in subject analysis (based on Library of Congress Subject Headings) of theses and dissertations (T/Ds) that are produced by graduate students in university departments in the hard sciences (physical sciences and engineering). It is aimed at those who have had little or no experience in cataloging, especially of this type of material, and for those who desire to supplement local mentoring resources for subject analysis in the hard sciences. Theses and dissertations from these departments present a special challenge because they are the results of current research representing specific new concepts with which the cataloger may not be familiar. In fact, subject headings often have not yet been created for the specific concept(s) being researched. Additionally, T/D authors often use jargon/terminology specific to their department. Catalogers often have many other duties in addition to subject analysis of T/Ds in the hard sciences, yet they desire to provide optimal access through accurate, thorough subject analysis. Tips are provided for determining the content of the T/D, strategic searches on WorldCat for possible subject headings, evaluating the relevancy of these subject headings for final selection, and selecting appropriate subdivisions where needed. Lists of basic reference resources are also provided.
    Source
    Cataloging and classification quarterly. 41(2005) no.1, S.133-161
  11. Hauser, E.; Tennis, J.T.: Episemantics: aboutness as aroundness (2019) 0.02
    0.016960748 = sum of:
      0.01522842 = product of:
        0.06091368 = sum of:
          0.06091368 = weight(_text_:authors in 5640) [ClassicSimilarity], result of:
            0.06091368 = score(doc=5640,freq=2.0), product of:
              0.2418733 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053056188 = queryNorm
              0.25184128 = fieldWeight in 5640, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5640)
        0.25 = coord(1/4)
      0.0017323275 = product of:
        0.003464655 = sum of:
          0.003464655 = weight(_text_:s in 5640) [ClassicSimilarity], result of:
            0.003464655 = score(doc=5640,freq=2.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.060061958 = fieldWeight in 5640, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5640)
        0.5 = coord(1/2)
    
    Abstract
     Aboutness ranks amongst our field's greatest bugbears. What is a work about? How can this be known? This mirrors debates within the philosophy of language, where the concept of representation has similarly evaded satisfactory definition. This paper proposes that we abandon the strong sense of the word aboutness, which seems to promise some inherent relationship between work and subject, or, in philosophical terms, between word and world. Instead, we seek an etymological reset to the older sense of aboutness as "in the vicinity, nearby; in some place or various places nearby; all over a surface." To distinguish this sense in the context of information studies, we introduce the term episemantics. The authors have each independently applied this term in slightly different contexts and scales (Hauser 2018a; Tennis 2016), and this article presents a unified definition of the term and guidelines for applying it at the scale of both words and works. The resulting weak concept of aboutness is pragmatic, in Star's sense of a focus on consequences over antecedents, while reserving space for the critique and improvement of aboutness determinations within various contexts and research programs. The paper finishes with a discussion of the implications of the concept of episemantics and the methodological possibilities it offers for knowledge organization research and practice. We draw inspiration from Melvil Dewey's use of physical aroundness in his first classification system and ask how aroundness might be more effectively operationalized in digital environments.
    Source
    Knowledge organization. 46(2019) no.8, S.590-595
  12. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.012252499 = product of:
      0.024504999 = sum of:
        0.024504999 = sum of:
          0.0029398573 = weight(_text_:s in 2293) [ClassicSimilarity], result of:
            0.0029398573 = score(doc=2293,freq=4.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.050964262 = fieldWeight in 2293, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.021565141 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.021565141 = score(doc=2293,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
    Footnote
     Review in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings?
     Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on the holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers, with at least one year of experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued.
     Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    Pages
    VII,173 S
  13. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.01
    0.010606981 = sum of:
      0.009137052 = product of:
        0.03654821 = sum of:
          0.03654821 = weight(_text_:authors in 3651) [ClassicSimilarity], result of:
            0.03654821 = score(doc=3651,freq=2.0), product of:
              0.2418733 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053056188 = queryNorm
              0.15110476 = fieldWeight in 3651, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3651)
        0.25 = coord(1/4)
      0.0014699287 = product of:
        0.0029398573 = sum of:
          0.0029398573 = weight(_text_:s in 3651) [ClassicSimilarity], result of:
            0.0029398573 = score(doc=3651,freq=4.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.050964262 = fieldWeight in 3651, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3651)
        0.5 = coord(1/2)
    
    Abstract
     The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers," (p. 365) and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method (citations, authors' names, imprints, style, and vocabulary) rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A Wojceichowski. Pullach: Verlag Dokumentation 1974. S.404-412.
    Pages
    S.356-368
  14. Bade, D.: The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.008388572 = product of:
      0.016777145 = sum of:
        0.016777145 = sum of:
          0.0024003836 = weight(_text_:s in 1858) [ClassicSimilarity], result of:
            0.0024003836 = score(doc=1858,freq=6.0), product of:
              0.057684682 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.053056188 = queryNorm
              0.04161215 = fieldWeight in 1858, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.014376761 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
            0.014376761 = score(doc=1858,freq=2.0), product of:
              0.18579373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053056188 = queryNorm
              0.07738023 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
    Footnote
     Review in: JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment.
     Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way.
Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
     Arguing that catalogers need to work both quickly and accurately, Bade maintains that employing specialists is the most efficient and effective way to achieve this outcome. Far less compelling than these arguments are Bade's concluding remarks, in which he offers meager suggestions for correcting the problems as he sees them. Overall, this essay is little more than a curmudgeon's diatribe. Addressed primarily to catalogers and library administrators, the analysis presented is too superficial to assist practicing catalogers or cataloging managers in developing solutions to any systemic problems in current cataloging practice, and it presents too little evidence of pervasive problems to convince budget-conscious library administrators of a need to alter practice or to increase their investment in local cataloging operations. Indeed, the reliance upon anecdotal evidence and the apparent nit-picking that dominate the essay might tend to reinforce a negative image of catalogers in the minds of some. To his credit, Bade does provide an important reminder that it is the intellectual contributions made by thousands of erudite catalogers that have made shared cataloging a successful strategy for improving cataloging efficiency. This is an important point that often seems to be forgotten in academic libraries when focus centers on cutting costs. Had Bade focused more narrowly upon the issue of deintellectualization of cataloging and written a carefully structured essay to advance this argument, this essay might have been much more effective." - KO 29(2002) nos.3/4, S.236-237 (A. Sauperl)
    Pages
    33 S
  15. Hicks, C.; Rush, J.; Strong, S.: Content analysis (1977) 0.00
    0.0039198096 = product of:
      0.007839619 = sum of:
        0.007839619 = product of:
          0.015679238 = sum of:
            0.015679238 = weight(_text_:s in 7514) [ClassicSimilarity], result of:
              0.015679238 = score(doc=7514,freq=4.0), product of:
                0.057684682 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.053056188 = queryNorm
                0.2718094 = fieldWeight in 7514, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.125 = fieldNorm(doc=7514)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S. -
  16. Wersig, G.: Inhaltsanalyse : Einführung in ihre Systematik und Literatur (1968) 0.00
    0.003464655 = product of:
      0.00692931 = sum of:
        0.00692931 = product of:
          0.01385862 = sum of:
            0.01385862 = weight(_text_:s in 2386) [ClassicSimilarity], result of:
              0.01385862 = score(doc=2386,freq=2.0), product of:
                0.057684682 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.053056188 = queryNorm
                0.24024783 = fieldWeight in 2386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.15625 = fieldNorm(doc=2386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    45 S
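     The fieldNorm values in these trees range from 0.15625 (the short Wersig record above) down to 0.015625 (the long Bade record at result 14), and they scale every term weight by document length. A rough sketch, assuming ClassicSimilarity's default lengthNorm of 1/sqrt(numTerms) and ignoring the one-byte quantization applied at index time, shows the approximate field lengths these norms imply:

```python
# Assumed formula: fieldNorm ~ lengthNorm = 1 / sqrt(numTerms), quantized to one byte
# at index time. Inverting it gives a rough idea of how long each indexed field was.
for field_norm in (0.15625, 0.046875, 0.015625):
    approx_terms = (1.0 / field_norm) ** 2
    print(f"fieldNorm {field_norm} ~ {approx_terms:.0f} indexed terms")
# Stored norms are heavily quantized, so these are order-of-magnitude figures only.
```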
  17. Pejtersen, A.M.: Fiction and library classification (1978) 0.00
    0.002771724 = product of:
      0.005543448 = sum of:
        0.005543448 = product of:
          0.011086896 = sum of:
            0.011086896 = weight(_text_:s in 722) [ClassicSimilarity], result of:
              0.011086896 = score(doc=722,freq=2.0), product of:
                0.057684682 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.053056188 = queryNorm
                0.19219826 = fieldWeight in 722, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.125 = fieldNorm(doc=722)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Scandinavian public library quarterly. 11(1978), S.5-12
  18. Beghtol, C.: Bibliographic classification theory and text linguistics : aboutness, analysis, intertextuality and the cognitive act of classifying documents (1986) 0.00
    0.002771724 = product of:
      0.005543448 = sum of:
        0.005543448 = product of:
          0.011086896 = sum of:
            0.011086896 = weight(_text_:s in 1346) [ClassicSimilarity], result of:
              0.011086896 = score(doc=1346,freq=2.0), product of:
                0.057684682 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.053056188 = queryNorm
                0.19219826 = fieldWeight in 1346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.125 = fieldNorm(doc=1346)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of documentation. 42(1986), S.84-113
  19. Computergestützte Inhaltsanalyse in der empirischen Sozialforschung (1983) 0.00
    0.002771724 = product of:
      0.005543448 = sum of:
        0.005543448 = product of:
          0.011086896 = sum of:
            0.011086896 = weight(_text_:s in 1877) [ClassicSimilarity], result of:
              0.011086896 = score(doc=1877,freq=2.0), product of:
                0.057684682 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.053056188 = queryNorm
                0.19219826 = fieldWeight in 1877, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.125 = fieldNorm(doc=1877)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    314 S
  20. Gardin, J.C.: Document analysis and linguistic theory (1973) 0.00
    0.002771724 = product of:
      0.005543448 = sum of:
        0.005543448 = product of:
          0.011086896 = sum of:
            0.011086896 = weight(_text_:s in 2387) [ClassicSimilarity], result of:
              0.011086896 = score(doc=2387,freq=2.0), product of:
                0.057684682 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.053056188 = queryNorm
                0.19219826 = fieldWeight in 2387, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.125 = fieldNorm(doc=2387)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of documentation. 29(1973) no.2, S.137-168

Languages

  • e 131
  • d 24
  • f 2
  • nl 1

Types

  • a 138
  • m 12
  • x 5
  • el 3
  • d 2
  • s 1