Search (156 results, page 1 of 8)

  • theme_ss:"Inhaltsanalyse"
  1. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.04
    0.042697184 = product of:
      0.08539437 = sum of:
        0.08539437 = sum of:
          0.02504445 = weight(_text_:m in 5830) [ClassicSimilarity], result of:
            0.02504445 = score(doc=5830,freq=2.0), product of:
              0.11386436 = queryWeight, product of:
                2.4884486 = idf(docFreq=9980, maxDocs=44218)
                0.045757167 = queryNorm
              0.21994986 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4884486 = idf(docFreq=9980, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.0107542 = weight(_text_:a in 5830) [ClassicSimilarity], result of:
            0.0107542 = score(doc=5830,freq=8.0), product of:
              0.05276016 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.045757167 = queryNorm
              0.20383182 = fieldWeight in 5830, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.049595714 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.049595714 = score(doc=5830,freq=2.0), product of:
              0.1602338 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045757167 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.5 = coord(1/2)
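
The breakdown above is standard Lucene "explain" output for the classic TF-IDF similarity. As a sanity check, here is a minimal Python sketch that re-derives the score for result 1 from the constants in the tree. The structure assumed is Lucene's ClassicSimilarity (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, term weights summed and scaled by the coord factor); every constant is copied from the lines above.

```python
import math

QUERY_NORM = 0.045757167  # queryNorm, shared by every term in the tree

def term_weight(freq: float, idf: float, field_norm: float) -> float:
    """One term's contribution: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # e.g. 1.4142135 for freq=2.0
    query_weight = idf * QUERY_NORM       # e.g. 0.11386436 for _text_:m
    field_weight = tf * idf * field_norm  # e.g. 0.21994986 for _text_:m
    return query_weight * field_weight

# (freq, idf, fieldNorm) for _text_:m, _text_:a, _text_:22 in doc 5830
terms = [(2.0, 2.4884486, 0.0625),
         (8.0, 1.153047, 0.0625),
         (2.0, 3.5018296, 0.0625)]

score = 0.5 * sum(term_weight(*t) for t in terms)  # 0.5 = coord(1/2)
print(f"{score:.9f}")  # ~0.042697184, the value listed for result 1
```

The same arithmetic applies to every score tree in this result list; only the constants and the nesting of coord factors vary.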
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson and M. Hudon
    Type
    a
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.023833368 = product of:
      0.047666736 = sum of:
        0.047666736 = product of:
          0.0715001 = sum of:
            0.00950546 = weight(_text_:a in 5835) [ClassicSimilarity], result of:
              0.00950546 = score(doc=5835,freq=4.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.18016359 = fieldWeight in 5835, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
            0.061994642 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.061994642 = score(doc=5835,freq=2.0), product of:
                0.1602338 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045757167 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
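
Result 2's tree nests one more factor than result 1's: the two term weights are first summed, scaled by the inner coord(2/3) (in Lucene's scoring, two of three optional clauses matched), and then by the outer coord(1/2). A one-line check, with the constants copied from the tree above:

```python
print(0.5 * (2 / 3) * (0.00950546 + 0.061994642))  # ~0.023833368
```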
    
    Date
    5. 8.2006 13:22:44
    Type
    a
  3. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.02
    0.020448945 = product of:
      0.04089789 = sum of:
        0.04089789 = sum of:
          0.013281826 = weight(_text_:m in 2293) [ClassicSimilarity], result of:
            0.013281826 = score(doc=2293,freq=4.0), product of:
              0.11386436 = queryWeight, product of:
                2.4884486 = idf(docFreq=9980, maxDocs=44218)
                0.045757167 = queryNorm
              0.11664603 = fieldWeight in 2293, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.4884486 = idf(docFreq=9980, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.009017671 = weight(_text_:a in 2293) [ClassicSimilarity], result of:
            0.009017671 = score(doc=2293,freq=40.0), product of:
              0.05276016 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.045757167 = queryNorm
              0.1709182 = fieldWeight in 2293, product of:
                6.3245554 = tf(freq=40.0), with freq of:
                  40.0 = termFreq=40.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.018598393 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.018598393 = score(doc=2293,freq=2.0), product of:
              0.1602338 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045757167 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers, with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered, and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
    Type
    m
  4. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.02
    0.01832427 = product of:
      0.03664854 = sum of:
        0.03664854 = product of:
          0.054972813 = sum of:
            0.0053771 = weight(_text_:a in 251) [ClassicSimilarity], result of:
              0.0053771 = score(doc=251,freq=2.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.10191591 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
            0.049595714 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.049595714 = score(doc=251,freq=2.0), product of:
                0.1602338 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045757167 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  5. Merrill, W.S.: Code for classifiers : principles governing the consistent placing of books in a system of classification (1969) 0.02
    0.017745905 = product of:
      0.03549181 = sum of:
        0.03549181 = product of:
          0.053237714 = sum of:
            0.043827787 = weight(_text_:m in 1640) [ClassicSimilarity], result of:
              0.043827787 = score(doc=1640,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.38491225 = fieldWeight in 1640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1640)
            0.009409925 = weight(_text_:a in 1640) [ClassicSimilarity], result of:
              0.009409925 = score(doc=1640,freq=2.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.17835285 = fieldWeight in 1640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1640)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    m
  6. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.02
    0.015732508 = product of:
      0.031465016 = sum of:
        0.031465016 = product of:
          0.04719752 = sum of:
            0.0033606873 = weight(_text_:a in 4888) [ClassicSimilarity], result of:
              0.0033606873 = score(doc=4888,freq=2.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.06369744 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
            0.043836832 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.043836832 = score(doc=4888,freq=4.0), product of:
                0.1602338 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045757167 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
    Type
    a
  7. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.02
    0.015691716 = product of:
      0.031383432 = sum of:
        0.031383432 = product of:
          0.04707515 = sum of:
            0.009878363 = weight(_text_:a in 5589) [ClassicSimilarity], result of:
              0.009878363 = score(doc=5589,freq=12.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.18723148 = fieldWeight in 5589, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
            0.037196785 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.037196785 = score(doc=5589,freq=2.0), product of:
                0.1602338 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045757167 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
    Type
    a
  8. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.02
    0.0154048195 = product of:
      0.030809639 = sum of:
        0.030809639 = product of:
          0.046214458 = sum of:
            0.009017671 = weight(_text_:a in 1416) [ClassicSimilarity], result of:
              0.009017671 = score(doc=1416,freq=10.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.1709182 = fieldWeight in 1416, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
            0.037196785 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.037196785 = score(doc=1416,freq=2.0), product of:
                0.1602338 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045757167 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    This paper reports on the preliminary findings of a study that explores mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images, and describes a 12-type taxonomy of association that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - are discovered. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
  9. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.013743203 = product of:
      0.027486406 = sum of:
        0.027486406 = product of:
          0.04122961 = sum of:
            0.004032825 = weight(_text_:a in 6525) [ClassicSimilarity], result of:
              0.004032825 = score(doc=6525,freq=2.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.07643694 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
            0.037196785 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.037196785 = score(doc=6525,freq=2.0), product of:
                0.1602338 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045757167 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
    Type
    a
  10. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.0122598 = product of:
      0.0245196 = sum of:
        0.0245196 = sum of:
          0.0062611126 = weight(_text_:m in 1858) [ClassicSimilarity], result of:
            0.0062611126 = score(doc=1858,freq=2.0), product of:
              0.11386436 = queryWeight, product of:
                2.4884486 = idf(docFreq=9980, maxDocs=44218)
                0.045757167 = queryNorm
              0.054987464 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4884486 = idf(docFreq=9980, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.005859559 = weight(_text_:a in 1858) [ClassicSimilarity], result of:
            0.005859559 = score(doc=1858,freq=38.0), product of:
              0.05276016 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.045757167 = queryNorm
              0.11106029 = fieldWeight in 1858, product of:
                6.164414 = tf(freq=38.0), with freq of:
                  38.0 = termFreq=38.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.012398928 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
            0.012398928 = score(doc=1858,freq=2.0), product of:
              0.1602338 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045757167 = queryNorm
              0.07738023 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Rez. in JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way.
    Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument. Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
    Arguing that catalogers need to work both quickly and accurately, Bade maintains that employing specialists is the most efficient and effective way to achieve this outcome. Far less compelling than these arguments are Bade's concluding remarks, in which he offers meager suggestions for correcting the problems as he sees them. Overall, this essay is little more than a curmudgeon's diatribe. Addressed primarily to catalogers and library administrators, the analysis presented is too superficial to assist practicing catalogers or cataloging managers in developing solutions to any systemic problems in current cataloging practice, and it presents too little evidence of pervasive problems to convince budget-conscious library administrators of a need to alter practice or to increase their investment in local cataloging operations. Indeed, the reliance upon anecdotal evidence and the apparent nit-picking that dominate the essay might tend to reinforce a negative image of catalogers in the minds of some. To his credit, Bade does provide an important reminder that it is the intellectual contributions made by thousands of erudite catalogers that have made shared cataloging a successful strategy for improving cataloging efficiency. This is an important point that often seems to be forgotten in academic libraries when focus centers on cutting costs. Had Bade focused more narrowly upon the issue of deintellectualization of cataloging and written a carefully structured essay to advance this argument, this essay might have been much more effective." - KO 29(2002) nos.3/4, S.236-237 (A. Sauperl)
    Type
    m
  11. Wersig, G.: Inhaltsanalyse : Einführung in ihre Systematik und Literatur (1968) 0.01
    0.010435188 = product of:
      0.020870376 = sum of:
        0.020870376 = product of:
          0.062611125 = sum of:
            0.062611125 = weight(_text_:m in 2386) [ClassicSimilarity], result of:
              0.062611125 = score(doc=2386,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.54987466 = fieldWeight in 2386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.15625 = fieldNorm(doc=2386)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    m
  12. Krippendorff, K.: Content analysis : an introduction to its methodology (1985) 0.01
    0.010435188 = product of:
      0.020870376 = sum of:
        0.020870376 = product of:
          0.062611125 = sum of:
            0.062611125 = weight(_text_:m in 7511) [ClassicSimilarity], result of:
              0.062611125 = score(doc=7511,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.54987466 = fieldWeight in 7511, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.15625 = fieldNorm(doc=7511)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Type
    m
  13. Franke-Maier, M.; Harbeck, M.: Superman = Persepolis = Naruto? : Herausforderungen und Probleme der formalen und inhaltlichen Vielfalt von Comics und Comicforschung für die Regensburger Verbundklassifikation (2016) 0.01
    0.010198826 = product of:
      0.020397652 = sum of:
        0.020397652 = product of:
          0.030596476 = sum of:
            0.026563652 = weight(_text_:m in 3306) [ClassicSimilarity], result of:
              0.026563652 = score(doc=3306,freq=4.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.23329206 = fieldWeight in 3306, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3306)
            0.004032825 = weight(_text_:a in 3306) [ClassicSimilarity], result of:
              0.004032825 = score(doc=3306,freq=2.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.07643694 = fieldWeight in 3306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3306)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  14. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.01
    0.010021043 = product of:
      0.020042086 = sum of:
        0.020042086 = product of:
          0.030063128 = sum of:
            0.021913894 = weight(_text_:m in 2691) [ClassicSimilarity], result of:
              0.021913894 = score(doc=2691,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.19245613 = fieldWeight in 2691, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2691)
            0.008149235 = weight(_text_:a in 2691) [ClassicSimilarity], result of:
              0.008149235 = score(doc=2691,freq=6.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.1544581 = fieldWeight in 2691, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2691)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    We describe here NKRL (Narrative Knowledge Representation Language), a semantic formalism for taking into account the characteristics of narrative multimedia documents. In these documents, the information content consists in the description of 'events' that relate the real or intended behaviour of some 'actors' (characters, personages, etc.). Narrative documents of economic interest include news stories, corporate documents, normative and legal texts, intelligence messages, representations of patients' medical records, etc. NKRL is characterised by the use of several knowledge representation principles and several high-level inference tools.
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference, Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
    Type
    a
  15. Nahotko, M.: Genre groups in knowledge organization (2016) 0.01
    0.010021043 = product of:
      0.020042086 = sum of:
        0.020042086 = product of:
          0.030063128 = sum of:
            0.021913894 = weight(_text_:m in 5139) [ClassicSimilarity], result of:
              0.021913894 = score(doc=5139,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.19245613 = fieldWeight in 5139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5139)
            0.008149235 = weight(_text_:a in 5139) [ClassicSimilarity], result of:
              0.008149235 = score(doc=5139,freq=6.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.1544581 = fieldWeight in 5139, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5139)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The article is an introduction to the development of Andersen's concept of textual tools used in knowledge organization (KO) in light of the theory of genres and activity systems. In particular, the discussion builds on the concepts of genre connectivity and genre group, in addition to previously established concepts such as genre hierarchy, set, system, and repertoire. Five genre groups used in KO are described. An analysis of groups, systems, and selected genres used in KO is provided, based on the method proposed by Yates and Orlikowski. The aim is to show the genre system as part of the activity system, and thus as a framework for KO.
    Type
    a
  16. From information to knowledge : conceptual and content analysis by computer (1995) 0.01
    0.009883702 = product of:
      0.019767404 = sum of:
        0.019767404 = product of:
          0.029651104 = sum of:
            0.022136377 = weight(_text_:m in 5392) [ClassicSimilarity], result of:
              0.022136377 = score(doc=5392,freq=4.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.19441006 = fieldWeight in 5392, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5392)
            0.007514726 = weight(_text_:a in 5392) [ClassicSimilarity], result of:
              0.007514726 = score(doc=5392,freq=10.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.14243183 = fieldWeight in 5392, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5392)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. and L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN and J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. and J.Z. NAMENWIRTH: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI et al.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY et al.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. and M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI and G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the Enlightenment to the twentieth century; DANNENBERG, R. et al.: A project in computer music: the musician's workbench
  17. Buckland, M.; Shaw, R.: 4W vocabulary mapping across diverse reference genres (2008) 0.01
    0.009553901 = product of:
      0.019107802 = sum of:
        0.019107802 = product of:
          0.028661702 = sum of:
            0.018783338 = weight(_text_:m in 2258) [ClassicSimilarity], result of:
              0.018783338 = score(doc=2258,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.1649624 = fieldWeight in 2258, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2258)
            0.009878363 = weight(_text_:a in 2258) [ClassicSimilarity], result of:
              0.009878363 = score(doc=2258,freq=12.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.18723148 = fieldWeight in 2258, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2258)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    This paper examines three themes in the design of search support services: linking different genres of reference resources (e.g. bibliographies, biographical dictionaries, catalogs, encyclopedias, place name gazetteers); the division of vocabularies by facet (e.g. What, Where, When, and Who); and mapping between both similar and dissimilar vocabularies. Different vocabularies within a facet can be used in conjunction, e.g. a place name combined with spatial coordinates for Where. In practice, vocabularies of different facets are used in combination in the representation or description of complex topics. Rich opportunities arise from mapping across vocabularies of dissimilar reference genres to recreate the amenities of a reference library. In a network environment, in which vocabulary control cannot be imposed, semantic correspondence across diverse vocabularies is a challenge and an opportunity.
    Type
    a
  18. Beghtol, C.: ¬The classification of fiction : the development of a system based on theoretical principles (1994) 0.01
    0.009522572 = product of:
      0.019045144 = sum of:
        0.019045144 = product of:
          0.028567716 = sum of:
            0.021913894 = weight(_text_:m in 3413) [ClassicSimilarity], result of:
              0.021913894 = score(doc=3413,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.19245613 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
            0.0066538225 = weight(_text_:a in 3413) [ClassicSimilarity], result of:
              0.0066538225 = score(doc=3413,freq=4.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.12611452 = fieldWeight in 3413, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets
    Type
    m
  19. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.01
    0.009522572 = product of:
      0.019045144 = sum of:
        0.019045144 = product of:
          0.028567716 = sum of:
            0.021913894 = weight(_text_:m in 5481) [ClassicSimilarity], result of:
              0.021913894 = score(doc=5481,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.19245613 = fieldWeight in 5481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5481)
            0.0066538225 = weight(_text_:a in 5481) [ClassicSimilarity], result of:
              0.0066538225 = score(doc=5481,freq=4.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.12611452 = fieldWeight in 5481, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5481)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
    Type
    a
  20. Lassak, L.: ¬Ein Versuch zur Repräsentation von Charakteren der Kinder- und Jugendbuchserie "Die drei ???" in einer Datenbank (2017) 0.01
    0.008872952 = product of:
      0.017745905 = sum of:
        0.017745905 = product of:
          0.026618857 = sum of:
            0.021913894 = weight(_text_:m in 1784) [ClassicSimilarity], result of:
              0.021913894 = score(doc=1784,freq=2.0), product of:
                0.11386436 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045757167 = queryNorm
                0.19245613 = fieldWeight in 1784, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1784)
            0.0047049625 = weight(_text_:a in 1784) [ClassicSimilarity], result of:
              0.0047049625 = score(doc=1784,freq=2.0), product of:
                0.05276016 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.045757167 = queryNorm
                0.089176424 = fieldWeight in 1784, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1784)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Footnote
    Master's thesis submitted for the academic degree of Master of Arts (M.A.)

Languages

  • e 133
  • d 20
  • f 2
  • nl 1

Types

  • a 139
  • m 13
  • el 3
  • x 2
  • d 1
  • s 1