Search (75 results, page 1 of 4)

  • theme_ss:"Inhaltsanalyse"
  1. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.07
    0.07476875 = product of:
      0.1495375 = sum of:
        0.0797216 = weight(_text_:subject in 6525) [ClassicSimilarity], result of:
          0.0797216 = score(doc=6525,freq=8.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.4741941 = fieldWeight in 6525, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.06981591 = sum of:
          0.031604223 = weight(_text_:classification in 6525) [ClassicSimilarity], result of:
            0.031604223 = score(doc=6525,freq=2.0), product of:
              0.14969917 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.04700564 = queryNorm
              0.21111822 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
          0.03821169 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
            0.03821169 = score(doc=6525,freq=2.0), product of:
              0.16460574 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04700564 = queryNorm
              0.23214069 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
      0.5 = coord(2/4)
    
    Abstract
    Examines the goals of bibliographic control, subject analysis and their relationship for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process to begin with subject analysis, resulting in the intellectual content of these materials driving the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
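  The indented "product of / sum of / weight(...)" trees shown under each result are the search engine's relevance-scoring explanations; the [ClassicSimilarity] labels indicate Lucene-style tf-idf scoring, in which each matching clause contributes the product of its queryWeight and fieldWeight and the clause sum is scaled by a coordination factor. As a minimal sketch under that assumption, the Python below recombines the figures reported for result no. 1 into its displayed score; the function and variable names are illustrative, not part of the search system.

    import math

    def clause_weight(freq, idf, query_norm, field_norm):
        # One "weight(_text_:term ...)" node from the explain tree:
        # queryWeight = idf * queryNorm, fieldWeight = sqrt(freq) * idf * fieldNorm.
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.04700564   # queryNorm, shared by all clauses
    FIELD_NORM = 0.046875     # fieldNorm(doc=6525)

    w_subject        = clause_weight(8.0, 3.576596,  QUERY_NORM, FIELD_NORM)  # ~0.0797216
    w_classification = clause_weight(2.0, 3.1847067, QUERY_NORM, FIELD_NORM)  # ~0.0316042
    w_22             = clause_weight(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # ~0.0382117

    # coord(2/4): the explain tree counts two of the four top-level query clauses as
    # matching (the 'subject' clause and the grouped 'classification'/'22' clause),
    # so the summed weights are halved.
    score = (w_subject + (w_classification + w_22)) * 0.5
    print(score)  # ~0.07476875, the value displayed next to result no. 1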
  2. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.07
    0.07292173 = product of:
      0.14584346 = sum of:
        0.113911726 = weight(_text_:subject in 2021) [ClassicSimilarity], result of:
          0.113911726 = score(doc=2021,freq=12.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.6775613 = fieldWeight in 2021, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2021)
        0.031931736 = product of:
          0.06386347 = sum of:
            0.06386347 = weight(_text_:classification in 2021) [ClassicSimilarity], result of:
              0.06386347 = score(doc=2021,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.42661208 = fieldWeight in 2021, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
    LCSH
    Subject cataloging
    Classification / Books
    Subject
    Subject cataloging
    Classification / Books
  3. Hjoerland, B.: ¬The concept of 'subject' in information science (1992) 0.07
    0.07092651 = product of:
      0.14185302 = sum of:
        0.1260509 = weight(_text_:subject in 2247) [ClassicSimilarity], result of:
          0.1260509 = score(doc=2247,freq=20.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.7497667 = fieldWeight in 2247, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2247)
        0.015802111 = product of:
          0.031604223 = sum of:
            0.031604223 = weight(_text_:classification in 2247) [ClassicSimilarity], result of:
              0.031604223 = score(doc=2247,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.21111822 = fieldWeight in 2247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2247)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article presents a theoretical investigation of the concept of 'subject' or 'subject matter' in library and information science. Most conceptions of 'subject' in the literature are not explicit but implicit. Various indexing and classification theories, including automatic indexing and citation indexing, have their own more or less implicit concepts of subject. This fact puts the emphasis on making the implicit theories of 'subject matter' explicit as the first step. ... The different conceptions of 'subject' can therefore be classified into epistemological positions, e.g. 'subjective idealism' (or the empiric/positivistic viewpoint), 'objective idealism' (the rationalistic viewpoint), 'pragmatism' and 'materialism/realism'. The third and final step is to propose a new theory of subject matter based on an explicit theory of knowledge. In this article this is done from the point of view of a realistic/materialistic epistemology. From this standpoint the subject of a document is defined as the epistemological potentials of that document
    Footnote
    Supplement to Langridge, D.W.: Subject analysis
  4. Svenonius, E.; McGarry, D.: Objectivity in evaluating subject heading assignment (1993) 0.07
    0.07073726 = product of:
      0.14147452 = sum of:
        0.123038724 = weight(_text_:subject in 5612) [ClassicSimilarity], result of:
          0.123038724 = score(doc=5612,freq=14.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.73184985 = fieldWeight in 5612, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5612)
        0.018435795 = product of:
          0.03687159 = sum of:
            0.03687159 = weight(_text_:classification in 5612) [ClassicSimilarity], result of:
              0.03687159 = score(doc=5612,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.24630459 = fieldWeight in 5612, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5612)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Recent papers have called attention to discrepancies in the assignment of LCSH. While philosophical arguments can be made that subject analysis, if not a logical impossibility, at least is point-of-view dependent, subject headings continue to be assigned and continue to be useful. The hypothesis advanced in the present project is that to a considerable degree there is a clear-cut right and wrong to LCSH subject heading assignment. To test the hypothesis, it was postulated that the assignment of a subject heading is correct if it is supported by textual warrant (at least 20% of the book being cataloged is on the topic) and is constructed in accordance with the LoC Subject Cataloging Manual: Subject Headings. A sample of 100 books on scientific subjects was used to test the hypothesis
    Source
    Cataloging and classification quarterly. 16(1993) no.2, S.5-40
  5. Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003) 0.07
    0.07073726 = product of:
      0.14147452 = sum of:
        0.123038724 = weight(_text_:subject in 5497) [ClassicSimilarity], result of:
          0.123038724 = score(doc=5497,freq=14.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.73184985 = fieldWeight in 5497, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5497)
        0.018435795 = product of:
          0.03687159 = sum of:
            0.03687159 = weight(_text_:classification in 5497) [ClassicSimilarity], result of:
              0.03687159 = score(doc=5497,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.24630459 = fieldWeight in 5497, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5497)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The principle of specificity for subject headings provides a clear advantage to many researchers for the precision it brings to subject searching. However, for some researchers very specific subject headings hinder an efficient and comprehensive search. An appropriate broader heading, especially when made narrower in scope by the addition of subheadings, can benefit researchers by providing generic access to their topic. Assigning both specific and generic subject headings to a work would enhance the subject accessibility for the diverse approaches and research needs of different catalog users. However, it can be difficult for catalogers to assign broader terms consistently to different works and without consistency the gathering function of those terms may not be realized.
    Source
    Cataloging and classification quarterly. 36(2003) no.2, S.59-87
  6. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.06
    0.059540343 = product of:
      0.119080685 = sum of:
        0.09300853 = weight(_text_:subject in 5481) [ClassicSimilarity], result of:
          0.09300853 = score(doc=5481,freq=8.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.5532265 = fieldWeight in 5481, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5481)
        0.026072152 = product of:
          0.052144304 = sum of:
            0.052144304 = weight(_text_:classification in 5481) [ClassicSimilarity], result of:
              0.052144304 = score(doc=5481,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.34832728 = fieldWeight in 5481, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5481)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes multiple experiments in text mining at Northern Illinois University that were undertaken to improve the efficiency and accuracy of cataloging. It focuses narrowly on subject analysis of dime novels, a format of inexpensive fiction that was popular in the United States between 1860 and 1915. NIU holds more than 55,000 dime novels in its collections, which it is in the process of comprehensively digitizing. Classification, keyword extraction, named-entity recognition, clustering, and topic modeling are discussed as means of assigning subject headings to improve their discoverability by researchers and to increase the productivity of digitization workflows.
    Source
    Cataloging and classification quarterly. 57(2019) no.5, S.315-336
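  The abstract of result no. 6 names classification, keyword extraction, named-entity recognition, clustering, and topic modeling as ways of suggesting subject headings for digitized dime novels. Purely as an illustration of the general idea (this is not the NIU workflow; the training texts, headings, and model choice below are invented), a supervised text-classification sketch with scikit-learn could look like this:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training set: OCR text of already-catalogued novels, each paired with
    # one assigned subject heading (texts and headings are invented examples).
    texts = [
        "the detective trailed the outlaw across the western plains to the ranch",
        "the heiress fled the gloomy mansion while the villain plotted her ruin",
    ]
    headings = ["Western stories", "Gothic fiction"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(texts, headings)

    # Suggest a heading for an uncatalogued text; in practice a cataloguer
    # would review the suggestion before it is added to the record.
    print(model.predict(["a sheriff rode across the plains after the outlaw gang"]))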
  7. Hoover, L.: ¬A beginners' guide for subject analysis of theses and dissertations in the hard sciences (2005) 0.06
    0.05641021 = product of:
      0.11282042 = sum of:
        0.09965199 = weight(_text_:subject in 5740) [ClassicSimilarity], result of:
          0.09965199 = score(doc=5740,freq=18.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.5927426 = fieldWeight in 5740, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5740)
        0.013168425 = product of:
          0.02633685 = sum of:
            0.02633685 = weight(_text_:classification in 5740) [ClassicSimilarity], result of:
              0.02633685 = score(doc=5740,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.17593184 = fieldWeight in 5740, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5740)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This guide, for beginning catalogers with humanities or social sciences backgrounds, provides assistance in subject analysis (based on Library of Congress Subject Headings) of theses and dissertations (T/Ds) that are produced by graduate students in university departments in the hard sciences (physical sciences and engineering). It is aimed at those who have had little or no experience in cataloging, especially of this type of material, and for those who desire to supplement local mentoring resources for subject analysis in the hard sciences. Theses and dissertations from these departments present a special challenge because they are the results of current research representing specific new concepts with which the cataloger may not be familiar. In fact, subject headings often have not yet been created for the specific concept(s) being researched. Additionally, T/D authors often use jargon/terminology specific to their department. Catalogers often have many other duties in addition to subject analysis of T/Ds in the hard sciences, yet they desire to provide optimal access through accurate, thorough subject analysis. Tips are provided for determining the content of the T/D, strategic searches on WorldCat for possible subject headings, evaluating the relevancy of these subject headings for final selection, and selecting appropriate subdivisions where needed. Lists of basic reference resources are also provided.
    Source
    Cataloging and classification quarterly. 41(2005) no.1, S.133-161
  8. Pejtersen, A.M.: ¬A new approach to the classification of fiction (1982) 0.06
    0.056025714 = product of:
      0.11205143 = sum of:
        0.06643467 = weight(_text_:subject in 7240) [ClassicSimilarity], result of:
          0.06643467 = score(doc=7240,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.39516178 = fieldWeight in 7240, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.078125 = fieldNorm(doc=7240)
        0.045616765 = product of:
          0.09123353 = sum of:
            0.09123353 = weight(_text_:classification in 7240) [ClassicSimilarity], result of:
              0.09123353 = score(doc=7240,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.6094458 = fieldWeight in 7240, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7240)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification Research, Augsburg, 28.6.-2.7.1982. Ed. I. Dahlberg
  9. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.05
    0.053384013 = product of:
      0.10676803 = sum of:
        0.071860075 = weight(_text_:subject in 2293) [ClassicSimilarity], result of:
          0.071860075 = score(doc=2293,freq=26.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.4274328 = fieldWeight in 2293, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.034907956 = sum of:
          0.015802111 = weight(_text_:classification in 2293) [ClassicSimilarity], result of:
            0.015802111 = score(doc=2293,freq=2.0), product of:
              0.14969917 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.04700564 = queryNorm
              0.10555911 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.019105844 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.019105844 = score(doc=2293,freq=2.0), product of:
              0.16460574 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04700564 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.5 = coord(2/4)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Review in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on the holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing of three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  10. Naun, C.C.: Objectivity and subject access in the print library (2006) 0.05
    0.049491778 = product of:
      0.098983556 = sum of:
        0.08054776 = weight(_text_:subject in 236) [ClassicSimilarity], result of:
          0.08054776 = score(doc=236,freq=6.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.4791082 = fieldWeight in 236, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=236)
        0.018435795 = product of:
          0.03687159 = sum of:
            0.03687159 = weight(_text_:classification in 236) [ClassicSimilarity], result of:
              0.03687159 = score(doc=236,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.24630459 = fieldWeight in 236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Librarians have inherited from the print environment a particular way of thinking about subject representation, one based on the conscious identification by librarians of appropriate subject classes and terminology. This conception has played a central role in shaping the profession's characteristic approach to upholding one of its core values: objectivity. It is argued that the social and technological roots of traditional indexing practice are closely intertwined. It is further argued that in traditional library practice objectivity is to be understood as impartiality, and reflects the mediating role that librarians have played in society. The case presented here is not a historical one based on empirical research, but rather a conceptual examination of practices that are already familiar to most librarians.
    Source
    Cataloging and classification quarterly. 43(2006) no.2, S.83-94
  11. Studwell, W.E.: Subject suggestions 6 : some concerns relating to quantity of subjects (1990) 0.05
    0.048115864 = product of:
      0.09623173 = sum of:
        0.07516225 = weight(_text_:subject in 466) [ClassicSimilarity], result of:
          0.07516225 = score(doc=466,freq=4.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.4470745 = fieldWeight in 466, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=466)
        0.02106948 = product of:
          0.04213896 = sum of:
            0.04213896 = weight(_text_:classification in 466) [ClassicSimilarity], result of:
              0.04213896 = score(doc=466,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.28149095 = fieldWeight in 466, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=466)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The number of subject headings for any individual bibliographic record is discussed. Four policy proposals are presented: how many different persons, places, and organisations should be used; how many uses of the same person, place, organisation, or topic should be allowed; an overall policy on secondary headings; and how many subjects should be assigned in total as a general policy.
    Source
    Cataloging and classification quarterly. 10(1990) no.4, S.99-104
  12. Bland, R.N.: ¬The concept of intellectual level in cataloging and classification (1983) 0.05
    0.04764335 = product of:
      0.0952867 = sum of:
        0.053147733 = weight(_text_:subject in 321) [ClassicSimilarity], result of:
          0.053147733 = score(doc=321,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.31612942 = fieldWeight in 321, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=321)
        0.04213896 = product of:
          0.08427792 = sum of:
            0.08427792 = weight(_text_:classification in 321) [ClassicSimilarity], result of:
              0.08427792 = score(doc=321,freq=8.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.5629819 = fieldWeight in 321, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=321)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper traces the history of the concept of intellectual level in cataloging and classification in the United States. Past cataloging codes, subject-heading practice, and classification systems have provided library users with little systematic information concerning the intellectual level or intended audience of works. Reasons for this omission are discussed, and arguments are developed to show that this kind of information would be a useful addition to the catalog record of the present and the future.
    Source
    Cataloging and classification quarterly. 4(1983) no.1, S.53-63
  13. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.04
    0.03628821 = product of:
      0.07257642 = sum of:
        0.046504267 = weight(_text_:subject in 354) [ClassicSimilarity], result of:
          0.046504267 = score(doc=354,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.27661324 = fieldWeight in 354, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=354)
        0.026072152 = product of:
          0.052144304 = sum of:
            0.052144304 = weight(_text_:classification in 354) [ClassicSimilarity], result of:
              0.052144304 = score(doc=354,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.34832728 = fieldWeight in 354, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=354)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper suggests a theoretical basis for identifying and classifying the kinds of subjects a picture may have, using previously developed principles of cataloging and classification, and concepts taken from the philosophy of art, from meaning in language, and from visual perception. The purpose of developing this theoretical basis is to provide the reader with a means for evaluating, adapting, and applying presently existing indexing languages, or for devising new languages for pictorial materials; this paper does not attempt to invent or prescribe a particular indexing language.
    Source
    Cataloging and classification quarterly. 6(1986) no.3, S.39-62
  14. Holley, R.M.; Joudrey, D.N.: Aboutness and conceptual analysis : a review (2021) 0.04
    0.03628821 = product of:
      0.07257642 = sum of:
        0.046504267 = weight(_text_:subject in 703) [ClassicSimilarity], result of:
          0.046504267 = score(doc=703,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.27661324 = fieldWeight in 703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=703)
        0.026072152 = product of:
          0.052144304 = sum of:
            0.052144304 = weight(_text_:classification in 703) [ClassicSimilarity], result of:
              0.052144304 = score(doc=703,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.34832728 = fieldWeight in 703, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=703)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The purpose of this paper is to provide an overview of aboutness and conceptual analysis, essential concepts for LIS practitioners to understand. Aboutness refers to the subject matter and genre/form properties of a resource. It is identified during conceptual analysis, which yields an aboutness statement, a summary of a resource's aboutness. While few scholars have discussed the aboutness determination process in detail, the methods described by Patrick Wilson, D.W. Langridge, Arlene G. Taylor, and Daniel N. Joudrey provide exemplary frameworks for determining aboutness and are presented here. Discussions of how to construct an aboutness statement and the challenges associated with aboutness determination follow.
    Content
    Cf.: https://doi.org/10.1080/01639374.2020.1856992. Part of a special issue: Cataloging and Classification: Back to Basics
    Source
    Cataloging and classification quarterly. 59(2021) no.2/3, S.159-185
  15. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.03
    0.031781983 = product of:
      0.063563965 = sum of:
        0.0398608 = weight(_text_:subject in 3651) [ClassicSimilarity], result of:
          0.0398608 = score(doc=3651,freq=8.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 3651, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
        0.023703163 = product of:
          0.047406327 = sum of:
            0.047406327 = weight(_text_:classification in 3651) [ClassicSimilarity], result of:
              0.047406327 = score(doc=3651,freq=18.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.3166773 = fieldWeight in 3651, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3651)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into a broader context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work. The fan is seen as the effect of generations of documents, each generation connected to the previous one and all ancestral to the present document. Thus, there are levels in temporal structure (that is, antecedent and successor documents), and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be an equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, or a unique feature of the environment may create a class of one (such as the French Revolution, Napoleon, or Niagara Falls), revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method (citations, authors' names, imprints, style, and vocabulary) rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A Wojceichowski. Pullach: Verlag Dokumentation 1974. S.404-412.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  16. Buckland, M.K.: Obsolescence in subject description (2012) 0.03
    0.0298956 = product of:
      0.1195824 = sum of:
        0.1195824 = weight(_text_:subject in 299) [ClassicSimilarity], result of:
          0.1195824 = score(doc=299,freq=18.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.7112912 = fieldWeight in 299, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=299)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The paper aims to explain the character and causes of obsolescence in assigned subject descriptors. Design/methodology/approach - The paper takes the form of a conceptual analysis with examples and reference to existing literature. Findings - Subject description comes in two forms: assigning the name or code of a subject to a document and assigning a document to a named subject category. Each method associates a document with the name of a subject. This naming activity is the site of tensions between the procedural need of information systems for stable records and the inherent multiplicity and instability of linguistic expressions. As languages change, previously assigned subject descriptions become obsolescent. The issues, tensions, and compromises involved are introduced. Originality/value - Drawing on the work of Robert Fairthorne and others, an explanation of the unavoidable obsolescence of assigned subject headings is presented. The discussion relates to libraries, but the same issues arise in any context in which subject description is expected to remain useful for an extended period of time.
  17. Dooley, J.M.: Subject indexing in context : subject cataloging of MARC AMC format archival records (1992) 0.03
    0.029710487 = product of:
      0.118841946 = sum of:
        0.118841946 = weight(_text_:subject in 2199) [ClassicSimilarity], result of:
          0.118841946 = score(doc=2199,freq=10.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.7068869 = fieldWeight in 2199, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=2199)
      0.25 = coord(1/4)
    
    Abstract
    Integration of archival materials catalogued in the USMARC AMC format into online catalogues has given a new urgency to the need for direct subject access. Offers a broad definition of the concepts to be considered under the subject access heading, including not only topical subjects but also proper names, forms of material, time periods, geographic places, occupations, and functions. It is both necessary and possible to provide more consistent subject access to archives and manuscripts than currently is being achieved. Describes current efforts that are under way in the profession to address this need
  18. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.03
    0.029089965 = product of:
      0.11635986 = sum of:
        0.11635986 = sum of:
          0.0526737 = weight(_text_:classification in 5835) [ClassicSimilarity], result of:
            0.0526737 = score(doc=5835,freq=2.0), product of:
              0.14969917 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.04700564 = queryNorm
              0.35186368 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.063686155 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.063686155 = score(doc=5835,freq=2.0), product of:
              0.16460574 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04700564 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.25 = coord(1/4)
    
    Date
    5. 8.2006 13:22:44
  19. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.03
    0.028914345 = product of:
      0.05782869 = sum of:
        0.051460076 = weight(_text_:subject in 1858) [ClassicSimilarity], result of:
          0.051460076 = score(doc=1858,freq=30.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.306091 = fieldWeight in 1858, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.006368615 = product of:
          0.01273723 = sum of:
            0.01273723 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.01273723 = score(doc=1858,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Review in: JASIST 54(2003) no.4, S.356-357 (S.J. Lincicum): "Reliance upon shared cataloging in academic libraries in the United States has been driven largely by the need to reduce the expense of cataloging operations without much regard for the impact that this approach might have on the quality of the records included in local catalogs. In recent years, ever increasing pressures have prompted libraries to adopt practices such as "rapid" copy cataloging that purposely reduce the scrutiny applied to bibliographic records downloaded from shared databases, possibly increasing the number of errors that slip through unnoticed. Errors in bibliographic records can lead to serious problems for library catalog users. If the data contained in bibliographic records is inaccurate, users will have difficulty discovering and recognizing resources in a library's collection that are relevant to their needs. Thus, it has become increasingly important to understand the extent and nature of errors that occur in the records found in large shared bibliographic databases, such as OCLC WorldCat, to develop cataloging practices optimized for the shared cataloging environment. Although this monograph raises a few legitimate concerns about recent trends in cataloging practice, it fails to provide the "detailed look" at misinformation in library catalogs arising from linguistic errors and mistakes in subject analysis promised by the publisher. A basic premise advanced throughout the text is that a certain amount of linguistic and subject knowledge is required to catalog library materials effectively. The author emphasizes repeatedly that most catalogers today are asked to catalog an increasingly diverse array of materials, and that they are often required to work in languages or subject areas of which they have little or no knowledge. He argues that the records contributed to shared databases are increasingly being created by catalogers with inadequate linguistic or subject expertise. This adversely affects the quality of individual library catalogs because errors often go uncorrected as records are downloaded from shared databases to local catalogs by copy catalogers who possess even less knowledge. Calling misinformation an "evil phenomenon," Bade states that his main goal is to discuss "two fundamental types of misinformation found in bibliographic and authority records in library catalogs: that arising from linguistic errors, and that caused by errors in subject analysis, including missing or wrong subject headings" (p. 2). After a superficial discussion of "other" types of errors that can occur in bibliographic records, such as typographical errors and errors in the application of descriptive cataloging rules, Bade begins his discussion of linguistic errors. He asserts that sharing bibliographic records created by catalogers with inadequate linguistic or subject knowledge has "disastrous effects on the library community" (p. 6). To support this bold assertion, Bade provides as evidence little more than a laundry list of errors that he has personally observed in bibliographic records over the years. When he eventually cites several studies that have addressed the availability and quality of records available for materials in languages other than English, he fails to describe the findings of these studies in any detail, let alone relate the findings to his own observations in a meaningful way.
Bade claims that a lack of linguistic expertise among catalogers is the "primary source for linguistic misinformation in our databases" (p. 10), but he neither cites substantive data from existing studies nor provides any new data regarding the overall level of linguistic knowledge among catalogers to support this claim. The section concludes with a brief list of eight sensible, if unoriginal, suggestions for coping with the challenge of cataloging materials in unfamiliar languages.
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument. Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
  20. Naves, M.M.L.: Analise de assunto : concepcoes (1996) 0.03
    0.028767055 = product of:
      0.11506822 = sum of:
        0.11506822 = weight(_text_:subject in 607) [ClassicSimilarity], result of:
          0.11506822 = score(doc=607,freq=6.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.68444026 = fieldWeight in 607, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.078125 = fieldNorm(doc=607)
      0.25 = coord(1/4)
    
    Abstract
    Discusses subject analysis as an important stage in the indexing process and observes confusions that can occur in the meaning of the term. Considers questions and difficulties about subject analysis and the concept of aboutness
    Footnote
    Translated title: Subject analysis: concepts

Languages

  • e 70
  • d 3
  • f 1
  • nl 1

Types

  • a 67
  • m 6
  • el 2
  • d 1
  • x 1