Search (21 results, page 1 of 2)

  • theme_ss:"Inhaltsanalyse"
  1. Jens-Erik Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.04
    0.044518895 = sum of:
      0.024905367 = product of:
        0.12452683 = sum of:
          0.12452683 = weight(_text_:author's in 2653) [ClassicSimilarity], result of:
            0.12452683 = score(doc=2653,freq=4.0), product of:
              0.2964857 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.04411889 = queryNorm
              0.42000958 = fieldWeight in 2653, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.03125 = fieldNorm(doc=2653)
        0.2 = coord(1/5)
      0.019613529 = product of:
        0.039227057 = sum of:
          0.039227057 = weight(_text_:i in 2653) [ClassicSimilarity], result of:
            0.039227057 = score(doc=2653,freq=4.0), product of:
              0.16640453 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04411889 = queryNorm
              0.2357331 = fieldWeight in 2653, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.03125 = fieldNorm(doc=2653)
        0.5 = coord(1/2)
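    The nested figures above are Lucene's ClassicSimilarity "explain" output for entry 1. A minimal sketch of the arithmetic it reports, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and taking queryNorm, fieldNorm and the coord fractions as given in the listing:

# Illustrative sketch (not the search engine's own code) of how the
# explain values for entry 1 combine into the document score.
import math

QUERY_NORM = 0.04411889                                # queryNorm from the listing

def term_score(freq, doc_freq, max_docs, field_norm):
    tf = math.sqrt(freq)                               # tf(freq=4.0) = 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # e.g. 6.7201533 for docFreq=144
    query_weight = idf * QUERY_NORM                    # e.g. 0.2964857
    field_weight = tf * idf * field_norm               # e.g. 0.42000958
    return query_weight * field_weight                 # e.g. 0.12452683

authors = term_score(4.0, 144, 44218, 0.03125) * 0.2   # weight of _text_:author's, times coord(1/5)
term_i = term_score(4.0, 2765, 44218, 0.03125) * 0.5   # weight of _text_:i, times coord(1/2)
print(round(authors + term_i, 9))                      # ~0.044518895, the score shown for entry 1

    The same decomposition accounts for the remaining entries; only freq, docFreq, fieldNorm and the coord fractions change.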
    
    Content
    1. Introduction
    The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document, with the goal of representing the document as truthfully as possible and of ensuring the subject representation's validity by remaining neutral and objective. To help indexers with their task, they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document that is being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach, in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents as well as when selecting index terms.
  2. Pejtersen, A.M.: ¬A new approach to the classification of fiction (1982) 0.02
    0.02451691 = product of:
      0.04903382 = sum of:
        0.04903382 = product of:
          0.09806764 = sum of:
            0.09806764 = weight(_text_:i in 7240) [ClassicSimilarity], result of:
              0.09806764 = score(doc=7240,freq=4.0), product of:
                0.16640453 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04411889 = queryNorm
                0.58933276 = fieldWeight in 7240, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7240)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification Research, Augsburg, 28.6.-2.7.1982. Ed. I. Dahlberg
  3. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.02
    0.022174317 = sum of:
      0.013208066 = product of:
        0.06604033 = sum of:
          0.06604033 = weight(_text_:author's in 2293) [ClassicSimilarity], result of:
            0.06604033 = score(doc=2293,freq=2.0), product of:
              0.2964857 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.04411889 = queryNorm
              0.22274372 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
        0.2 = coord(1/5)
      0.00896625 = product of:
        0.0179325 = sum of:
          0.0179325 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.0179325 = score(doc=2293,freq=2.0), product of:
              0.15449683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04411889 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
        0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers, with at least one year of experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summary is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. The stories alternate comments from the researcher with direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
  4. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.02
    0.01605343 = product of:
      0.03210686 = sum of:
        0.03210686 = product of:
          0.06421372 = sum of:
            0.06421372 = weight(_text_:i in 133) [ClassicSimilarity], result of:
              0.06421372 = score(doc=133,freq=14.0), product of:
                0.16640453 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04411889 = queryNorm
                0.38588926 = fieldWeight in 133, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=133)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. We therefore have difficulty teaching indexing and explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable yet different conceptualizations that result from the variety of groups dealing with a problem from their respective viewpoints. Documents indexed in multiple ways are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and I accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it applies in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since no one has yet shown in practice how such an improved indexing system would work and whether the indexing results would really be "better".
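    A frame-based representation of the kind the abstract alludes to can be pictured as a concept with named slots and fillers. A minimal, hypothetical sketch (the class, slot names and example fillers are my own illustrations, not Sigel's):

# Hypothetical sketch of a frame: a concept whose named slots hold fillers,
# so that relational knowledge structures of a domain can be stated explicitly.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Frame:
    concept: str
    slots: Dict[str, List[str]] = field(default_factory=dict)  # slot name -> fillers

doc_frame = Frame(
    concept="document",
    slots={
        "topic": ["subject indexing"],
        "discourse_community": ["social sciences"],
        "intended_audience": ["indexers", "researchers"],
    },
)
print(doc_frame.slots["topic"])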
  5. Beghtol, C.: ¬The classification of fiction : the development of a system based on theoretical principles (1994) 0.02
    0.015409409 = product of:
      0.030818818 = sum of:
        0.030818818 = product of:
          0.15409409 = sum of:
            0.15409409 = weight(_text_:author's in 3413) [ClassicSimilarity], result of:
              0.15409409 = score(doc=3413,freq=2.0), product of:
                0.2964857 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04411889 = queryNorm
                0.51973534 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets
  6. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.01
    0.0149437515 = product of:
      0.029887503 = sum of:
        0.029887503 = product of:
          0.059775006 = sum of:
            0.059775006 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.059775006 = score(doc=5835,freq=2.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:44
  7. Kessel, K.: Who's afraid of the big, bad uktena monster? : subject cataloging for images (2016) 0.01
    0.013868858 = product of:
      0.027737716 = sum of:
        0.027737716 = product of:
          0.055475432 = sum of:
            0.055475432 = weight(_text_:i in 3003) [ClassicSimilarity], result of:
              0.055475432 = score(doc=3003,freq=2.0), product of:
                0.16640453 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04411889 = queryNorm
                0.33337694 = fieldWeight in 3003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3003)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes the difference between cataloging images and cataloging books, the obstacles to including subject data in image cataloging records and how these obstacles can be overcome to make image collections more accessible. I call for participants to help create a subject authority reference resource for non-Western art. This article is an expanded and revised version of a presentation for the 2016 Joint ARLIS/VRA conference in Seattle.
  8. Andersson, R.; Holst, E.: Indexes and other depictions of fictions : a new model for analysis empirically tested (1996) 0.01
    0.013208066 = product of:
      0.026416132 = sum of:
        0.026416132 = product of:
          0.13208066 = sum of:
            0.13208066 = weight(_text_:author's in 473) [ClassicSimilarity], result of:
              0.13208066 = score(doc=473,freq=2.0), product of:
                0.2964857 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04411889 = queryNorm
                0.44548744 = fieldWeight in 473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=473)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    In this study, descriptions of a novel by 100 users at 2 Swedish public libraries, Malmö and Mölndal, Mar-Apr 95, were compared to the index terms used for the novels at these libraries. Describes previous systems for fiction indexing, the 2 libraries, and the users interviewed. Compares the AMP system with their own model. The latter operates with terms under the headings phenomena, frame and author's intention. The similarities between the users' and indexers' descriptions were sufficiently close to make it possible to retrieve fiction in accordance with users' wishes in Mölndal, and would have been in Malmö, had more books been indexed with more terms. Sometimes the similarities were close enough for users to retrieve fiction on their own.
  9. Clavier, V.; Paganelli, C.: Including authorial stance in the indexing of scientific documents (2012) 0.01
    0.013208066 = product of:
      0.026416132 = sum of:
        0.026416132 = product of:
          0.13208066 = sum of:
            0.13208066 = weight(_text_:author's in 320) [ClassicSimilarity], result of:
              0.13208066 = score(doc=320,freq=2.0), product of:
                0.2964857 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04411889 = queryNorm
                0.44548744 = fieldWeight in 320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=320)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    This article argues that authorial stance should be taken into account in the indexing of scientific documents. Authorial stance has been widely studied in linguistics and is a typical feature of scientific writing that reveals the uniqueness of each author's perspective, their scientific contribution, and their thinking. We argue that authorial stance guides the reading of scientific documents and that it can be used to characterize the knowledge contained in such documents. Our research has previously shown that people reading dissertations are interested both in a topic and in a document's authorial stance. Now, we would like to propose a two-tiered indexing system. Dissertations would first be divided into paragraphs; then, each information unit would be defined by topic and by the markers of authorial stance present in the document.
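    The two-tiered indexing the authors propose (paragraph-level units described by topic plus authorial-stance markers) can be pictured with a small data structure. A minimal sketch under that reading; the class and field names are mine, not the authors':

# Illustrative sketch of a two-tiered index record: a dissertation split into
# paragraph-level information units, each described by its topic and by the
# authorial-stance markers found in it. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationUnit:
    paragraph_id: int
    topic: str
    stance_markers: List[str] = field(default_factory=list)  # e.g. "we argue", "in our view"

@dataclass
class DissertationIndex:
    title: str
    units: List[InformationUnit] = field(default_factory=list)

index = DissertationIndex(
    title="Sample dissertation",
    units=[InformationUnit(1, "authorial stance in indexing", ["we argue"])],
)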
  10. Bade, D.: ¬The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.01
    0.012911929 = product of:
      0.025823858 = sum of:
        0.025823858 = sum of:
          0.013868858 = weight(_text_:i in 1858) [ClassicSimilarity], result of:
            0.013868858 = score(doc=1858,freq=2.0), product of:
              0.16640453 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04411889 = queryNorm
              0.083344236 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
          0.011955 = weight(_text_:22 in 1858) [ClassicSimilarity], result of:
            0.011955 = score(doc=1858,freq=2.0), product of:
              0.15449683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04411889 = queryNorm
              0.07738023 = fieldWeight in 1858, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1858)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
    Footnote
    Bade begins his discussion of errors in subject analysis by summarizing the contents of seven records containing what he considers to be egregious errors. The examples were drawn only from items that he has encountered in the course of his work. Five of the seven records were full-level ("I" level) records for Eastern European materials created between 1996 and 2000 in the OCLC WorldCat database. The final two examples were taken from records created by Bade himself over an unspecified period of time. Although he is to be commended for examining the actual items cataloged and for examining mostly items that he claims to have adequate linguistic and subject expertise to evaluate reliably, Bade's methodology has major flaws. First and foremost, the number of examples provided is completely inadequate to draw any conclusions about the extent of the problem. Although an in-depth qualitative analysis of a small number of records might have yielded some valuable insight into factors that contribute to errors in subject analysis, Bade provides no information about the circumstances under which the live OCLC records he critiques were created. Instead, he offers simplistic explanations for the errors based solely on his own assumptions. He supplements his analysis of examples with an extremely brief survey of other studies regarding errors in subject analysis, which consists primarily of criticism of work done by Sheila Intner. In the end, it is impossible to draw any reliable conclusions about the nature or extent of errors in subject analysis found in records in shared bibliographic databases based on Bade's analysis. In the final third of the essay, Bade finally reveals his true concern: the deintellectualization of cataloging. It would strengthen the essay tremendously to present this as the primary premise from the very beginning, as this section offers glimpses of a compelling argument. Bade laments, "Many librarians simply do not see cataloging as an intellectual activity requiring an educated mind" (p. 20). Commenting on recent trends in copy cataloging practice, he declares, "The disaster of our time is that this work is being done more and more by people who can neither evaluate nor correct imported errors and often are forbidden from even thinking about it" (p. 26). Bade argues that the most valuable content found in catalog records is the intellectual content contributed by knowledgeable catalogers, and he asserts that to perform intellectually demanding tasks such as subject analysis reliably and effectively, catalogers must have the linguistic and subject knowledge required to gain at least a rudimentary understanding of the materials that they describe. He contends that requiring catalogers to quickly dispense with materials in unfamiliar languages and subjects clearly undermines their ability to perform the intellectual work of cataloging and leads to an increasing number of errors in the bibliographic records contributed to shared databases.
  11. Shaw, R.: Information organization and the philosophy of history (2013) 0.01
    0.012135251 = product of:
      0.024270503 = sum of:
        0.024270503 = product of:
          0.048541006 = sum of:
            0.048541006 = weight(_text_:i in 946) [ClassicSimilarity], result of:
              0.048541006 = score(doc=946,freq=2.0), product of:
                0.16640453 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04411889 = queryNorm
                0.29170483 = fieldWeight in 946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=946)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The philosophy of history can help articulate problems relevant to information organization. One such problem is "aboutness": How do texts relate to the world? In response to this problem, philosophers of history have developed theories of colligation describing how authors bind together phenomena under organizing concepts. Drawing on these ideas, I present a theory of subject analysis that avoids the problematic illusion of an independent "landscape" of subjects. This theory points to a broad vision of the future of information organization and some specific challenges to be met.
  12. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    0.011955 = product of:
      0.02391 = sum of:
        0.02391 = product of:
          0.04782 = sum of:
            0.04782 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.04782 = score(doc=5830,freq=2.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:08
  13. Hauff-Hartig, S.: Automatic transcription of videos : Television 3.0: automated sentiment analysis and compilation of short videos with a high excitement level; AI-generated metadata: from technology monitoring to productive use (2021) 0.01
    0.011955 = product of:
      0.02391 = sum of:
        0.02391 = product of:
          0.04782 = sum of:
            0.04782 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.04782 = score(doc=251,freq=2.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
  14. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.01
    0.01100672 = product of:
      0.02201344 = sum of:
        0.02201344 = product of:
          0.1100672 = sum of:
            0.1100672 = weight(_text_:author's in 2069) [ClassicSimilarity], result of:
              0.1100672 = score(doc=2069,freq=2.0), product of:
                0.2964857 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04411889 = queryNorm
                0.3712395 = fieldWeight in 2069, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2069)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings, and those of other researchers in the areas of information science, social psychology, and psycholinguistics, indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
  15. Sauperl, A.: Subject cataloging process of Slovenian and American catalogers (2005) 0.01
    0.01100672 = product of:
      0.02201344 = sum of:
        0.02201344 = product of:
          0.1100672 = sum of:
            0.1100672 = weight(_text_:author's in 4702) [ClassicSimilarity], result of:
              0.1100672 = score(doc=4702,freq=2.0), product of:
                0.2964857 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.04411889 = queryNorm
                0.3712395 = fieldWeight in 4702, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4702)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - An empirical study has shown that the real process of subject cataloging does not correspond entirely to theoretical descriptions in textbooks and international standards. The purpose of this paper is to address the issue of whether it is possible for catalogers who have not received formal training to perform subject cataloging in a different way from their trained colleagues. Design/methodology/approach - A qualitative study was conducted in 2001 among five Slovenian public library catalogers. The resulting model is compared to previous findings. Findings - First, all catalogers attempted to determine what the book was about. While the American catalogers tried to understand the topic and the author's intent, the Slovenian catalogers appeared to focus on the topic only. Slovenian and American academic library catalogers did not demonstrate any anticipation of possible uses that users might have of the book, while this was important for American public library catalogers. All catalogers used existing records to build new ones and/or to search for subject headings. The verification of subject representation with the indexing language was the last step in the subject cataloging process of American catalogers, often skipped by Slovenian catalogers. Research limitations/implications - The small and convenient sample limits the findings. Practical implications - Comparison of the subject cataloging processes of Slovenian and American catalogers, two different groups, is important because they both contribute to OCLC's WorldCat database. If the cataloging community is building a universal catalog and approaches to subject description are different, then the resulting subject representations might also be different. Originality/value - This is one of the very few empirical studies of subject cataloging and indexing.
  16. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    0.010566828 = product of:
      0.021133656 = sum of:
        0.021133656 = product of:
          0.04226731 = sum of:
            0.04226731 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.04226731 = score(doc=4888,freq=4.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  17. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.00896625 = product of:
      0.0179325 = sum of:
        0.0179325 = product of:
          0.035865 = sum of:
            0.035865 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.035865 = score(doc=6525,freq=2.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  18. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.01
    0.00896625 = product of:
      0.0179325 = sum of:
        0.0179325 = product of:
          0.035865 = sum of:
            0.035865 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.035865 = score(doc=1416,freq=2.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  19. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.01
    0.00896625 = product of:
      0.0179325 = sum of:
        0.0179325 = product of:
          0.035865 = sum of:
            0.035865 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.035865 = score(doc=5589,freq=2.0), product of:
                0.15449683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04411889 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library trends. 55(2006) no.1, S.22-45
  20. From information to knowledge : conceptual and content analysis by computer (1995) 0.01
    0.008668036 = product of:
      0.017336072 = sum of:
        0.017336072 = product of:
          0.034672145 = sum of:
            0.034672145 = weight(_text_:i in 5392) [ClassicSimilarity], result of:
              0.034672145 = score(doc=5392,freq=2.0), product of:
                0.16640453 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04411889 = queryNorm
                0.20836058 = fieldWeight in 5392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5392)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    SCHMIDT, K.M.: Concepts - content - meaning: an introduction; DUCHASTEL, J. et al.: The SACAO project: using computation toward textual data analysis; PAQUIN, L.-C. u. L. DUPUY: An approach to expertise transfer: computer-assisted text analysis; HOGENRAAD, R., Y. BESTGEN u. J.-L. NYSTEN: Terrorist rhetoric: texture and architecture; MOHLER, P.P.: On the interaction between reading and computing: an interpretative approach to content analysis; LANCASHIRE, I.: Computer tools for cognitive stylistics; MERGENTHALER, E.: An outline of knowledge based text analysis; NAMENWIRTH, J.Z.: Ideography in computer-aided content analysis; WEBER, R.P. u. J.Z. Namenwirth: Content-analytic indicators: a self-critique; McKINNON, A.: Optimizing the aberrant frequency word technique; ROSATI, R.: Factor analysis in classical archaeology: export patterns of Attic pottery trade; PETRILLO, P.S.: Old and new worlds: ancient coinage and modern technology; DARANYI, S., S. MARJAI u.a.: Caryatids and the measurement of semiosis in architecture; ZARRI, G.P.: Intelligent information retrieval: an application in the field of historical biographical data; BOUCHARD, G., R. ROY u.a.: Computers and genealogy: from family reconstitution to population reconstruction; DEMÉLAS-BOHY, M.-D. u. M. RENAUD: Instability, networks and political parties: a political history expert system prototype; DARANYI, S., A. ABRANYI u. G. KOVACS: Knowledge extraction from ethnopoetic texts by multivariate statistical methods; FRAUTSCHI, R.L.: Measures of narrative voice in French prose fiction applied to textual samples from the enlightenment to the twentieth century; DANNENBERG, R. u.a.: A project in computer music: the musician's workbench