Search (38 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.10
    0.09750496 = product of:
      0.19500992 = sum of:
        0.10090631 = weight(_text_:description in 6525) [ClassicSimilarity], result of:
          0.10090631 = score(doc=6525,freq=4.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.43587846 = fieldWeight in 6525, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.09410361 = sum of:
          0.053626917 = weight(_text_:access in 6525) [ClassicSimilarity], result of:
            0.053626917 = score(doc=6525,freq=4.0), product of:
              0.16876608 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.04979191 = queryNorm
              0.31775886 = fieldWeight in 6525, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
          0.040476695 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
            0.040476695 = score(doc=6525,freq=2.0), product of:
              0.17436278 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04979191 = queryNorm
              0.23214069 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
      0.5 = coord(2/4)
    
    Abstract
    Examines the goals of bibliographic control, subject analysis and their relationship for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process by beginning with subject analysis, so that the intellectual content of these materials drives the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately.
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
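    The explain tree above can be reproduced by hand. Lucene's ClassicSimilarity scores each matching term as queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, tf = sqrt(freq), and idf = 1 + ln(maxDocs / (docFreq + 1)); the per-term scores are summed and the result is multiplied by coord (matched clauses / total clauses). A minimal sketch for record 1, taking queryNorm and fieldNorm as given constants from the tree (not recomputed):

    ```python
    import math

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        """ClassicSimilarity weight for one query term in one field."""
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm                  # e.g. 4.64937 * 0.04979191
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QN, FN = 0.04979191, 0.046875  # queryNorm and fieldNorm, copied from the tree

    description = term_score(4.0, 1149, 44218, QN, FN)   # ~ 0.10090631
    access      = term_score(4.0, 4053, 44218, QN, FN)   # ~ 0.05362692
    twenty_two  = term_score(2.0, 3622, 44218, QN, FN)   # ~ 0.04047670

    # Two of the four query clauses matched, hence coord(2/4) = 0.5
    total = 0.5 * (description + (access + twenty_two))
    print(total)  # ~ 0.09750496, the displayed document score
    ```

    The small rounding differences against the printed tree come from the idf values being shown to six significant figures; the structure of the computation is otherwise exactly what the explain output records.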
  2. Caldera-Serrano, J.: Thematic description of audio-visual information on television (2010) 0.07
    0.07127224 = product of:
      0.14254448 = sum of:
        0.123584494 = weight(_text_:description in 3953) [ClassicSimilarity], result of:
          0.123584494 = score(doc=3953,freq=6.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.53383994 = fieldWeight in 3953, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=3953)
        0.018959979 = product of:
          0.037919957 = sum of:
            0.037919957 = weight(_text_:access in 3953) [ClassicSimilarity], result of:
              0.037919957 = score(doc=3953,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.22468945 = fieldWeight in 3953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3953)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - This paper endeavours to show the possibilities for thematic description of audio-visual documents for television with the aim of promoting and facilitating information retrieval. Design/methodology/approach - To achieve these goals different database fields are shown, as well as the way in which they are organised for indexing and thematic element description, analysed and used as an example. Some of the database fields are extracted from an analytical study of the documentary system of television in Spain. Others are being tested in university television on which indexing experiments are carried out. Findings - Not all thematic descriptions are used on television information systems; nevertheless, some television channels do use thematic descriptions of both image and sound, applying thesauri. Moreover, it is possible to access sequences using full text retrieval as well. Originality/value - The development of the documentary task, applying the described techniques, promotes thematic indexing and hence thematic retrieval, which is without doubt one of the aspects most demanded by television journalists (along with people's names). This conceptualisation translates into the adaptation of databases to new indexing methods.
  3. Winget, M.: Describing art : an alternative approach to subject access and interpretation (2009) 0.04
    0.040902082 = product of:
      0.081804164 = sum of:
        0.05945961 = weight(_text_:description in 3618) [ClassicSimilarity], result of:
          0.05945961 = score(doc=3618,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 3618, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3618)
        0.02234455 = product of:
          0.0446891 = sum of:
            0.0446891 = weight(_text_:access in 3618) [ClassicSimilarity], result of:
              0.0446891 = score(doc=3618,freq=4.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.26479906 = fieldWeight in 3618, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3618)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine the art historical antecedents of providing subject access to images. After reviewing the assumptions and limitations inherent in the most prevalent descriptive method, the paper seeks to introduce a new model that allows for more comprehensive representation of visually-based cultural materials. Design/methodology/approach - The paper presents a literature-based conceptual analysis, taking Panofsky's theory of iconography and iconology as the starting-point. Panofsky's conceptual model, while appropriate for art created in the Western academic tradition, ignores or misrepresents work from other eras or cultures. Continued dependence on Panofskian descriptive methods limits the functionality and usefulness of image representation systems. Findings - The paper recommends the development of a more precise and inclusive descriptive model for art objects, which is based on the premise that art is not another sort of text, and should not be interpreted as such. Practical implications - The paper provides suggestions for the development of representation models that will enhance the description of non-textual artifacts. Originality/value - The paper addresses issues in information science, the history of art, and computer science, and suggests that a new descriptive model would be of great value to both humanist and social science scholars.
  4. Buckland, M.K.: Obsolescence in subject description (2012) 0.03
    0.030896123 = product of:
      0.123584494 = sum of:
        0.123584494 = weight(_text_:description in 299) [ClassicSimilarity], result of:
          0.123584494 = score(doc=299,freq=6.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.53383994 = fieldWeight in 299, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=299)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The paper aims to explain the character and causes of obsolescence in assigned subject descriptors. Design/methodology/approach - The paper takes the form of a conceptual analysis with examples and reference to existing literature. Findings - Subject description comes in two forms: assigning the name or code of a subject to a document and assigning a document to a named subject category. Each method associates a document with the name of a subject. This naming activity is the site of tensions between the procedural need of information systems for stable records and the inherent multiplicity and instability of linguistic expressions. As languages change, previously assigned subject descriptions become obsolescent. The issues, tensions, and compromises involved are introduced. Originality/value - Drawing on the work of Robert Fairthorne and others, an explanation of the unavoidable obsolescence of assigned subject headings is presented. The discussion relates to libraries, but the same issues arise in any context in which subject description is expected to remain useful for an extended period of time.
  5. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.03
    0.027725544 = product of:
      0.110902175 = sum of:
        0.110902175 = sum of:
          0.06319993 = weight(_text_:access in 4888) [ClassicSimilarity], result of:
            0.06319993 = score(doc=4888,freq=8.0), product of:
              0.16876608 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.04979191 = queryNorm
              0.37448242 = fieldWeight in 4888, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
          0.047702245 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
            0.047702245 = score(doc=4888,freq=4.0), product of:
              0.17436278 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04979191 = queryNorm
              0.27358043 = fieldWeight in 4888, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4888)
      0.25 = coord(1/4)
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual, but also visual-video, audio or multimedia in the full sense. Among the aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language only is limiting, and it is instead necessary to consider ampler criteria, such as those of MultiMedia Information Retrieval, according to which every type of digital document can be analyzed and searched by the proper elements of language for its proper nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which has an approach to information management that handles the concrete textual, visual, audio, or video content of the documents directly, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and composing the merits and the advantages of each of the approaches and of the systems of access to information.
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  6. Wilson, P.: Subjects and the sense of position (1985) 0.03
    0.026340859 = product of:
      0.052681718 = sum of:
        0.04162173 = weight(_text_:description in 3648) [ClassicSimilarity], result of:
          0.04162173 = score(doc=3648,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.17979069 = fieldWeight in 3648, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3648)
        0.011059988 = product of:
          0.022119977 = sum of:
            0.022119977 = weight(_text_:access in 3648) [ClassicSimilarity], result of:
              0.022119977 = score(doc=3648,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.13106886 = fieldWeight in 3648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3648)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    One knows one is in the presence of "theory" when fundamental questions of a "why" nature are asked. Too often it happens that those involved in the design of bibliographic information systems have no time for brooding. It is thus noteworthy when someone appears on the bibliographic scene who troubles to address, and pursue with philosophic rigor, fundamental questions about the way we organize information. Such a person is Patrick Wilson, formerly philosophy professor at the University of California, Los Angeles, and since 1965, on the faculty of the School of Library and Information Studies, University of California, Berkeley. Bibliographic control is the central concept of Wilson's book Two Kinds of Power. It is represented as a kind of power, a power over knowledge. That power is of two kinds: descriptive and exploitive. Descriptive power is the power to retrieve all writings that satisfy some "evaluatively neutral" description, for instance, all writings by Hobbes or all writings on the subject of eternal recurrence. Descriptive power is achieved insofar as the items in our bibliographic universe are fitted with descriptions and these descriptions are syndetically related. Exploitive power is a less-familiar concept, but it is more important since it can be used to explain why we attempt to order our bibliographic universe in the first place. Exploitive power is the power to obtain the best textual means to an end. Unlike the concept of descriptive power, that of exploitive power has a normative aspect to it. Someone possessing such power would understand the goal of all bibliographic activity; that is, he would understand the diversity of user purposes and the relativity of what is valuable; he would be omniscient both as a bibliographer and as a psychologist. Since exploitive power is ever out of reach, descriptive power is used as a substitute or approximation for it. How adequate this approximation is is the subject of Wilson's book.
 The particular chapter excerpted in this volume deals with the adequacy of subject access methods. Cutter's statement that one of the objects of a library catalog is to show what the library has on a given subject is generally accepted, as though it were obvious what "being on a given subject" means. It is far from obvious. Wilson challenges the underlying presumption that for any document a heading can be found that is coextensive with its subject. This presumption implies that there is such a thing as the (singular) subject of a document and that it can be identified. But, as Wilson shows in his elaborate explication, the notion of "subject" is essentially indeterminate, with the consequence that we are limited in our attempts to achieve either descriptive or exploitive power.
  7. Wilson, P.: Subjects and the sense of position (1968) 0.02
    0.023783846 = product of:
      0.09513538 = sum of:
        0.09513538 = weight(_text_:description in 1353) [ClassicSimilarity], result of:
          0.09513538 = score(doc=1353,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.41095015 = fieldWeight in 1353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0625 = fieldNorm(doc=1353)
      0.25 = coord(1/4)
    
    Abstract
    Wilson argues that the subject of a writing is indeterminate, by which he means that either it is impossible to say which of two descriptions is 'the' description of the subject of a writing, or it is impossible to say whether a writing has two subjects rather than one.
  8. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.02
    0.02289747 = product of:
      0.04579494 = sum of:
        0.035675768 = weight(_text_:description in 2293) [ClassicSimilarity], result of:
          0.035675768 = score(doc=2293,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.1541063 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.010119174 = product of:
          0.020238347 = sum of:
            0.020238347 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
              0.020238347 = score(doc=2293,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.116070345 = fieldWeight in 2293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2293)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient, way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information.
  9. Mai, J.-E.: Semiotics and indexing : an analysis of the subject indexing process (2001) 0.02
    0.021022148 = product of:
      0.08408859 = sum of:
        0.08408859 = weight(_text_:description in 4480) [ClassicSimilarity], result of:
          0.08408859 = score(doc=4480,freq=4.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.36323205 = fieldWeight in 4480, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4480)
      0.25 = coord(1/4)
    
    Abstract
    This paper explains at least some of the major problems related to the subject indexing process and proposes a new approach to understanding the process, which is ordinarily described as a process that takes a number of steps. The subject is first determined, then it is described in a few sentences and, lastly, the description of the subject is converted into the indexing language. It is argued that this typical approach characteristically lacks an understanding of what the central nature of the process is. Indexing is not a neutral and objective representation of a document's subject matter but the representation of an interpretation of a document for future use. Semiotics is offered here as a framework for understanding the "interpretative" nature of the subject indexing process. By placing this process within Peirce's semiotic framework of ideas and terminology, a more detailed description of the process is offered which shows that the uncertainty generally associated with this process is created by the fact that the indexer goes through a number of steps and creates the subject matter of the document during this process. The creation of the subject matter is based on the indexer's social and cultural context. The paper offers an explanation of what occurs in the indexing process and suggests that there is only little certainty to its result.
  10. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.02
    0.020810865 = product of:
      0.08324346 = sum of:
        0.08324346 = weight(_text_:description in 2691) [ClassicSimilarity], result of:
          0.08324346 = score(doc=2691,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.35958138 = fieldWeight in 2691, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2691)
      0.25 = coord(1/4)
    
    Abstract
    We describe here NKRL (Narrative Knowledge Representation Language), a semantic formalism for taking into account the characteristics of narrative multimedia documents. In these documents, the information content consists in the description of 'events' that relate the real or intended behaviour of some 'actors' (characters, personages, etc.). Narrative documents of an economic interest correspond to news stories, corporate documents, normative and legal texts, intelligence messages, representation of patient's medical records, etc. NKRL is characterised by the use of several knowledge representation principles and several high-level inference tools.
  11. Austin, J.; Pejtersen, A.M.: Fiction retrieval: experimental design and evaluation of a search system based on user's value criteria. Pt.1 (1983) 0.02
    0.02058303 = product of:
      0.08233212 = sum of:
        0.08233212 = weight(_text_:26 in 142) [ClassicSimilarity], result of:
          0.08233212 = score(doc=142,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.4682183 = fieldWeight in 142, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.09375 = fieldNorm(doc=142)
      0.25 = coord(1/4)
    
    Date
    5. 8.2006 13:27:26
  12. Buckland, M.; Shaw, R.: 4W vocabulary mapping across diverse reference genres (2008) 0.02
    0.017837884 = product of:
      0.071351536 = sum of:
        0.071351536 = weight(_text_:description in 2258) [ClassicSimilarity], result of:
          0.071351536 = score(doc=2258,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.3082126 = fieldWeight in 2258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=2258)
      0.25 = coord(1/4)
    
    Content
    This paper examines three themes in the design of search support services: linking different genres of reference resources (e.g. bibliographies, biographical dictionaries, catalogs, encyclopedias, place name gazetteers); the division of vocabularies by facet (e.g. What, Where, When, and Who); and mapping between both similar and dissimilar vocabularies. Different vocabularies within a facet can be used in conjunction, e.g. a place name combined with spatial coordinates for Where. In practice, vocabularies of different facets are used in combination in the representation or description of complex topics. Rich opportunities arise from mapping across vocabularies of dissimilar reference genres to recreate the amenities of a reference library. In a network environment, in which vocabulary control cannot be imposed, semantic correspondence across diverse vocabularies is a challenge and an opportunity.
  13. Sauperl, A.: Subject cataloging process of Slovenian and American catalogers (2005) 0.01
    0.014864903 = product of:
      0.05945961 = sum of:
        0.05945961 = weight(_text_:description in 4702) [ClassicSimilarity], result of:
          0.05945961 = score(doc=4702,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.25684384 = fieldWeight in 4702, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4702)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - An empirical study has shown that the real process of subject cataloging does not correspond entirely to theoretical descriptions in textbooks and international standards. The purpose of this paper is to address the issue of whether it is possible for catalogers who have not received formal training to perform subject cataloging in a different way to their trained colleagues. Design/methodology/approach - A qualitative study was conducted in 2001 among five Slovenian public library catalogers. The resulting model is compared to previous findings. Findings - First, all catalogers attempted to determine what the book was about. While the American catalogers tried to understand the topic and the author's intent, the Slovenian catalogers appeared to focus on the topic only. Slovenian and American academic library catalogers did not demonstrate any anticipation of possible uses that users might have of the book, while this was important for American public library catalogers. All catalogers used existing records to build new ones and/or to search for subject headings. The verification of subject representation with the indexing language was the last step in the subject cataloging process of American catalogers, often skipped by Slovenian catalogers. Research limitations/implications - The small and convenient sample limits the findings. Practical implications - Comparison of subject cataloging processes of Slovenian and American catalogers, two different groups, is important because they both contribute to OCLC's WorldCat database. If the cataloging community is building a universal catalog and approaches to subject description are different, then the resulting subject representations might also be different. Originality/value - This is one of the very few empirical studies of subject cataloging and indexing.
  14. Studwell, W.E.: Subject suggestions 6 : some concerns relating to quantity of subjects (1990) 0.01
    0.01372202 = product of:
      0.05488808 = sum of:
        0.05488808 = weight(_text_:26 in 466) [ClassicSimilarity], result of:
          0.05488808 = score(doc=466,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.31214553 = fieldWeight in 466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0625 = fieldNorm(doc=466)
      0.25 = coord(1/4)
    
    Date
    7. 1.2007 21:14:26
  15. Ornager, S.: View a picture : theoretical image analysis and empirical user studies on indexing and retrieval (1996) 0.01
    0.012006768 = product of:
      0.048027072 = sum of:
        0.048027072 = weight(_text_:26 in 904) [ClassicSimilarity], result of:
          0.048027072 = score(doc=904,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.27312735 = fieldWeight in 904, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=904)
      0.25 = coord(1/4)
    
    Date
    26. 7.1996 19:42:42
  16. Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.01
    0.011891923 = product of:
      0.04756769 = sum of:
        0.04756769 = weight(_text_:description in 2653) [ClassicSimilarity], result of:
          0.04756769 = score(doc=2653,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.20547508 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.03125 = fieldNorm(doc=2653)
      0.25 = coord(1/4)
    
    Content
    1. Introduction The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document, with the goal of representing the document as truthfully as possible and ensuring the subject representation's validity by remaining neutral and objective. To help indexers with their task they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document that is being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents as well as when selecting index terms.
  17. Dooley, J.M.: Subject indexing in context : subject cataloging of MARC AMC format archival records (1992) 0.01
    0.0109465495 = product of:
      0.043786198 = sum of:
        0.043786198 = product of:
          0.087572396 = sum of:
            0.087572396 = weight(_text_:access in 2199) [ClassicSimilarity], result of:
              0.087572396 = score(doc=2199,freq=6.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.51889807 = fieldWeight in 2199, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2199)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Integration of archival materials catalogued in the USMARC AMC format into online catalogues has given a new urgency to the need for direct subject access. Offers a broad definition of the concepts to be considered under the subject access heading, including not only topical subjects but also proper names, forms of material, time periods, geographic places, occupations, and functions. It is both necessary and possible to provide more consistent subject access to archives and manuscripts than currently is being achieved. Describes current efforts that are under way in the profession to address this need
  18. Bednarek, M.: Intellectual access to pictorial information (1993) 0.01
    0.010598951 = product of:
      0.042395804 = sum of:
        0.042395804 = product of:
          0.08479161 = sum of:
            0.08479161 = weight(_text_:access in 5631) [ClassicSimilarity], result of:
              0.08479161 = score(doc=5631,freq=10.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.5024209 = fieldWeight in 5631, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5631)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
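The nested score breakdowns throughout this listing are Lucene explain-style output for the classic TF-IDF similarity. A minimal sketch reproducing entry 18's leaf score, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and queryWeight = idf * queryNorm, with the coord factors then scaling the leaf score to the final document score:

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm):
    """Reproduce one leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                 # term-frequency factor
    query_weight = idf * query_norm      # "queryWeight" in the explain output
    field_weight = tf * idf * field_norm # "fieldWeight" in the explain output
    return query_weight * field_weight

# Values taken from entry 18 (doc 5631, term "access"):
leaf = classic_similarity_score(freq=10.0, idf=3.389428,
                                query_norm=0.04979191, field_norm=0.046875)
final = leaf * 0.5 * 0.25               # coord(1/2) * coord(1/4)
print(leaf, final)                      # leaf ≈ 0.0847916, final ≈ 0.0105990
```

The same computation, with different freq, idf, and fieldNorm inputs, reproduces the other score trees in this listing.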
    
    Abstract
    Visual materials represent a significantly different type of communication from textual materials and therefore present distinct challenges for the process of retrieval, especially if by retrieval we mean intellectual access to the content of images. This paper outlines the special characteristics of visual materials, focusing on their potential complexity and subjectivity, and the methods used and explored for gaining access to visual materials as reported in the literature. It concludes that methods of access to visual materials are dominated by the relatively mature systems developed for textual materials and that access methods based on visual communication are still largely in the developmental or prototype stage. Although reported research on user requirements in the retrieval of visual information is noticeably lacking, the results of at least one study indicate that the visually-based retrieval methods of structured and unstructured browsing seem to be preferred for visual materials and that effective retrieval methods are ultimately related to characteristics of the enquirer and the visual information sought
  19. Andersson, R.; Holst, E.: Indexes and other depictions of fictions : a new model for analysis empirically tested (1996) 0.01
    0.010291515 = product of:
      0.04116606 = sum of:
        0.04116606 = weight(_text_:26 in 473) [ClassicSimilarity], result of:
          0.04116606 = score(doc=473,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.23410915 = fieldWeight in 473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.25 = coord(1/4)
    
    Date
    26. 7.1996 19:42:42
  20. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.010234068 = product of:
      0.020468136 = sum of:
        0.01372202 = weight(_text_:26 in 1858) [ClassicSimilarity], result of:
          0.01372202 = score(doc=1858,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.07803638 = fieldWeight in 1858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.015625 = fieldNorm(doc=1858)
        0.006746116 = product of:
          0.013492232 = sum of:
            0.013492232 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
              0.013492232 = score(doc=1858,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.07738023 = fieldWeight in 1858, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1858)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument.
Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.