Search (147 results, page 1 of 8)

  • theme_ss:"Inhaltsanalyse"
  1. Clavier, V.; Paganelli, C.: Including authorial stance in the indexing of scientific documents (2012) 0.08
    0.078055054 = sum of:
      0.03750447 = product of:
        0.15001789 = sum of:
          0.15001789 = weight(_text_:author's in 320) [ClassicSimilarity], result of:
            0.15001789 = score(doc=320,freq=2.0), product of:
              0.33674997 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.05011046 = queryNorm
              0.44548744 = fieldWeight in 320, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.046875 = fieldNorm(doc=320)
        0.25 = coord(1/4)
      0.009875605 = weight(_text_:a in 320) [ClassicSimilarity], result of:
        0.009875605 = score(doc=320,freq=10.0), product of:
          0.057779714 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.05011046 = queryNorm
          0.1709182 = fieldWeight in 320, product of:
            3.1622777 = tf(freq=10.0), with freq of:
              10.0 = termFreq=10.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.046875 = fieldNorm(doc=320)
      0.030674977 = product of:
        0.061349954 = sum of:
          0.061349954 = weight(_text_:de in 320) [ClassicSimilarity], result of:
            0.061349954 = score(doc=320,freq=2.0), product of:
              0.21534915 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.05011046 = queryNorm
              0.28488597 = fieldWeight in 320, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.046875 = fieldNorm(doc=320)
        0.5 = coord(1/2)
    
    Abstract
    This article argues that authorial stance should be taken into account in the indexing of scientific documents. Authorial stance has been widely studied in linguistics and is a typical feature of scientific writing that reveals the uniqueness of each author's perspective, their scientific contribution, and their thinking. We argue that authorial stance guides the reading of scientific documents and that it can be used to characterize the knowledge contained in such documents. Our research has previously shown that people reading dissertations are interested both in a topic and in a document's authorial stance. Now, we would like to propose a two-tiered indexing system. Dissertations would first be divided into paragraphs; then, each information unit would be defined by topic and by the markers of authorial stance present in the document.
    Content
    Paper from: Selected Papers from the 8th ISKO-France Conference, 27-28 June 2011, Lille, Université Charles-de-Gaulle Lille 3. See: http://www.ergon-verlag.de/isko_ko/downloads/ko_39_2012_4_g.pdf.
    Type
    a
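  The indented tree above is Lucene "explain" output for the entry's score: each matching query term contributes queryWeight times fieldWeight, and coord factors scale each group of clauses by the fraction that matched. As a cross-check, here is a minimal Python sketch that reproduces the "author's" clause of entry 1 from the constants in the tree; the formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) are assumed from Lucene's documented ClassicSimilarity, and the variable names are ours, not part of the output.

      import math

      # Constants read off the explain tree for doc 320, term "author's".
      freq, doc_freq, max_docs = 2.0, 144, 44218
      query_norm, field_norm = 0.05011046, 0.046875

      tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 6.7201533 = idf(docFreq=144, maxDocs=44218)
      query_weight = idf * query_norm                # 0.33674997 = queryWeight
      field_weight = tf * idf * field_norm           # 0.44548744 = fieldWeight in 320
      clause = query_weight * field_weight * 0.25    # 0.25 = coord(1/4): 1 of 4 query clauses matched
      print(round(clause, 8))                        # 0.03750447, matching the tree

  The "a" and "de" clauses decompose the same way; summed with the clause above they give this entry's total of 0.078055054.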
  2. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.08
    0.07510649 = product of:
      0.11265973 = sum of:
        0.010409801 = weight(_text_:a in 6320) [ClassicSimilarity], result of:
          0.010409801 = score(doc=6320,freq=4.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.18016359 = fieldWeight in 6320, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=6320)
        0.10224993 = product of:
          0.20449986 = sum of:
            0.20449986 = weight(_text_:de in 6320) [ClassicSimilarity], result of:
              0.20449986 = score(doc=6320,freq=8.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.94961995 = fieldWeight in 6320, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6320)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Translated title: Analysis model for a classification of iconographic documents
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
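  Every explain tree in this list instantiates the same scoring function, which can be written compactly as follows (a reconstruction from the trees, assuming Lucene's documented ClassicSimilarity; N is maxDocs and df(t) is docFreq):

      \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d) \sum_{t \in q} \sqrt{\mathrm{freq}(t,d)} \cdot \mathrm{idf}(t)^{2} \cdot \mathrm{queryNorm}(q) \cdot \mathrm{fieldNorm}(d),
      \qquad \mathrm{idf}(t) = 1 + \ln\frac{N}{\mathrm{df}(t)+1}

  For the "de" clause of entry 2, for instance, sqrt(8) * 4.297489^2 * 0.05011046 * 0.078125 ≈ 0.2045, matching the 0.20449986 above. Nested boolean groups carry their own coord factors, such as the 0.5 = coord(1/2) inside this entry, while the final 0.6666667 = coord(2/3) reflects that two of the three top-level query clauses matched.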
  3. Naves, M.M.L.: Análise de assunto : concepções (1996) 0.06
    0.063941255 = product of:
      0.095911875 = sum of:
        0.00736084 = weight(_text_:a in 607) [ClassicSimilarity], result of:
          0.00736084 = score(doc=607,freq=2.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.12739488 = fieldWeight in 607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=607)
        0.08855104 = product of:
          0.17710207 = sum of:
            0.17710207 = weight(_text_:de in 607) [ClassicSimilarity], result of:
              0.17710207 = score(doc=607,freq=6.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.822395 = fieldWeight in 607, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=607)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Revista de Biblioteconomia de Brasilia. 20(1996) no.2, S.215-226
    Type
    a
  4. Riesthuis, G.J.A.; Stuurman, P.: Tendenzen in de onderwerpsontsluiting : T.1: Inhoudsanalyse (1989) 0.05
    0.05458675 = product of:
      0.08188012 = sum of:
        0.0103051765 = weight(_text_:a in 1841) [ClassicSimilarity], result of:
          0.0103051765 = score(doc=1841,freq=2.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.17835285 = fieldWeight in 1841, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=1841)
        0.07157495 = product of:
          0.1431499 = sum of:
            0.1431499 = weight(_text_:de in 1841) [ClassicSimilarity], result of:
              0.1431499 = score(doc=1841,freq=2.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.66473395 = fieldWeight in 1841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1841)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Type
    a
  5. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.05
    0.046788644 = product of:
      0.070182964 = sum of:
        0.008833009 = weight(_text_:a in 7292) [ClassicSimilarity], result of:
          0.008833009 = score(doc=7292,freq=2.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.15287387 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
        0.061349954 = product of:
          0.12269991 = sum of:
            0.12269991 = weight(_text_:de in 7292) [ClassicSimilarity], result of:
              0.12269991 = score(doc=7292,freq=2.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.56977195 = fieldWeight in 7292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7292)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
  6. Sauperl, A.: Subject determination during the cataloging process (2002) 0.04
    0.038811754 = sum of:
      0.018752236 = product of:
        0.07500894 = sum of:
          0.07500894 = weight(_text_:author's in 2293) [ClassicSimilarity], result of:
            0.07500894 = score(doc=2293,freq=2.0), product of:
              0.33674997 = queryWeight, product of:
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.05011046 = queryNorm
              0.22274372 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.7201533 = idf(docFreq=144, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
        0.25 = coord(1/4)
      0.009875605 = weight(_text_:a in 2293) [ClassicSimilarity], result of:
        0.009875605 = score(doc=2293,freq=40.0), product of:
          0.057779714 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.05011046 = queryNorm
          0.1709182 = fieldWeight in 2293, product of:
            6.3245554 = tf(freq=40.0), with freq of:
              40.0 = termFreq=40.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.0234375 = fieldNorm(doc=2293)
      0.010183913 = product of:
        0.020367825 = sum of:
          0.020367825 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.020367825 = score(doc=2293,freq=2.0), product of:
              0.1754783 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05011046 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
        0.5 = coord(1/2)
    
    Date
    27. 9.2005 14:22:19
    Footnote
    Review in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summary is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. The stories alternate between the researcher's comments and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient, way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern about the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  7. Beghtol, C.: ¬The classification of fiction : the development of a system based on theoretical principles (1994) 0.03
    0.034028053 = product of:
      0.051042076 = sum of:
        0.043755215 = product of:
          0.17502086 = sum of:
            0.17502086 = weight(_text_:author's in 3413) [ClassicSimilarity], result of:
              0.17502086 = score(doc=3413,freq=2.0), product of:
                0.33674997 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05011046 = queryNorm
                0.51973534 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.25 = coord(1/4)
        0.0072868606 = weight(_text_:a in 3413) [ClassicSimilarity], result of:
          0.0072868606 = score(doc=3413,freq=4.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.12611452 = fieldWeight in 3413, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3413)
      0.6666667 = coord(2/3)
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and Appendix 2 lists EFAS coding sheets
  8. Moraes, J.B.E. de: Aboutness in fiction : methodological perspectives for knowledge organization (2012) 0.03
    0.03281854 = product of:
      0.04922781 = sum of:
        0.008327841 = weight(_text_:a in 856) [ClassicSimilarity], result of:
          0.008327841 = score(doc=856,freq=4.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.14413087 = fieldWeight in 856, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=856)
        0.04089997 = product of:
          0.08179994 = sum of:
            0.08179994 = weight(_text_:de in 856) [ClassicSimilarity], result of:
              0.08179994 = score(doc=856,freq=2.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.37984797 = fieldWeight in 856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=856)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. and K.S. Raghavan
    Type
    a
  9. Pozzi de Sousa, B.; Ortega, C.D.: Aspects regarding the notion of subject in the context of different theoretical trends : teaching approaches in Brazil (2018) 0.03
    0.031192428 = product of:
      0.04678864 = sum of:
        0.005888672 = weight(_text_:a in 4707) [ClassicSimilarity], result of:
          0.005888672 = score(doc=4707,freq=2.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.10191591 = fieldWeight in 4707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.04089997 = product of:
          0.08179994 = sum of:
            0.08179994 = weight(_text_:de in 4707) [ClassicSimilarity], result of:
              0.08179994 = score(doc=4707,freq=2.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.37984797 = fieldWeight in 4707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4707)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Type
    a
  10. Andersson, R.; Holst, E.: Indexes and other depictions of fictions : a new model for analysis empirically tested (1996) 0.03
    0.030102722 = product of:
      0.045154084 = sum of:
        0.03750447 = product of:
          0.15001789 = sum of:
            0.15001789 = weight(_text_:author's in 473) [ClassicSimilarity], result of:
              0.15001789 = score(doc=473,freq=2.0), product of:
                0.33674997 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05011046 = queryNorm
                0.44548744 = fieldWeight in 473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.046875 = fieldNorm(doc=473)
          0.25 = coord(1/4)
        0.0076496103 = weight(_text_:a in 473) [ClassicSimilarity], result of:
          0.0076496103 = score(doc=473,freq=6.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.13239266 = fieldWeight in 473, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.6666667 = coord(2/3)
    
    Abstract
    In this study, descriptions of a novel by 100 users at 2 Swedish public libraries, Malmö and Mölndal, Mar-Apr 95, were compared to the index terms used for the novels at these libraries. Describes previous systems for fiction indexing, the 2 libraries, and the users interviewed. Compares the AMP system with their own model. The latter operates with terms under the headings phenomena, frame, and author's intention. The similarities between the users' and indexers' descriptions were sufficiently close to make it possible to retrieve fiction in accordance with users' wishes in Mölndal, and would have been in Malmö, had more books been indexed with more terms. Sometimes the similarities were close enough for users to retrieve fiction on their own
    Type
    a
  11. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.03
    0.029570788 = product of:
      0.044356182 = sum of:
        0.010409801 = weight(_text_:a in 5835) [ClassicSimilarity], result of:
          0.010409801 = score(doc=5835,freq=4.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.18016359 = fieldWeight in 5835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5835)
        0.03394638 = product of:
          0.06789276 = sum of:
            0.06789276 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.06789276 = score(doc=5835,freq=2.0), product of:
                0.1754783 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05011046 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    5. 8.2006 13:22:44
    Type
    a
  12. Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.03
    0.02912493 = product of:
      0.043687396 = sum of:
        0.035359554 = product of:
          0.14143822 = sum of:
            0.14143822 = weight(_text_:author's in 2653) [ClassicSimilarity], result of:
              0.14143822 = score(doc=2653,freq=4.0), product of:
                0.33674997 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05011046 = queryNorm
                0.42000958 = fieldWeight in 2653, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2653)
          0.25 = coord(1/4)
        0.008327841 = weight(_text_:a in 2653) [ClassicSimilarity], result of:
          0.008327841 = score(doc=2653,freq=16.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.14413087 = fieldWeight in 2653, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2653)
      0.6666667 = coord(2/3)
    
    Abstract
    The paper demonstrates that indexing is a complex phenomenon and presents a domain-centered approach to it. The indexing process is analysed using Means-Ends Analysis, a tool developed for the Cognitive Work Analysis framework. A Means-Ends Analysis of indexing provides a holistic understanding of indexing and shows the importance of understanding the users' activities when indexing. The paper's domain-centered approach accordingly includes an analysis of the users' activities.
    Content
    1. Introduction The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document, with the goal of representing the document as truthfully as possible and ensuring the subject representation's validity by remaining neutral and objective. To help indexers with their task, they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document that is being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach, in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents as well as when selecting index terms.
    Type
    a
  13. Volpers, H.: Inhaltsanalyse (2013) 0.03
    0.027293375 = product of:
      0.04094006 = sum of:
        0.0051525882 = weight(_text_:a in 1018) [ClassicSimilarity], result of:
          0.0051525882 = score(doc=1018,freq=2.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.089176424 = fieldWeight in 1018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1018)
        0.035787474 = product of:
          0.07157495 = sum of:
            0.07157495 = weight(_text_:de in 1018) [ClassicSimilarity], result of:
              0.07157495 = score(doc=1018,freq=2.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.33236697 = fieldWeight in 1018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1018)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : De Gruyter Saur
    Type
    a
  14. Fremery, W. De; Buckland, M.K.: Context, relevance, and labor (2022) 0.03
    0.026338657 = product of:
      0.039507985 = sum of:
        0.008833009 = weight(_text_:a in 4240) [ClassicSimilarity], result of:
          0.008833009 = score(doc=4240,freq=8.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.15287387 = fieldWeight in 4240, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4240)
        0.030674977 = product of:
          0.061349954 = sum of:
            0.061349954 = weight(_text_:de in 4240) [ClassicSimilarity], result of:
              0.061349954 = score(doc=4240,freq=2.0), product of:
                0.21534915 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.05011046 = queryNorm
                0.28488597 = fieldWeight in 4240, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4240)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Since information science concerns the transmission of records, it concerns context. The transmission of documents ensures their arrival in new contexts. Documents and their copies are spread across times and places. The amount of labor required to discover and retrieve relevant documents is also formulated by context. Thus, any serious consideration of communication and of information technologies quickly leads to a concern with context, relevance, and labor. Information scientists have developed many theories of context, relevance, and labor, but not a framework for organizing them and describing their relationship with one another. We propose that the words context and relevance can be used to articulate a useful framework for considering the diversity of approaches to context and relevance in information science, as well as their relations with each other and with labor.
    Type
    a
  15. Sauperl, A.: Subject cataloging process of Slovenian and American catalogers (2005) 0.03
    0.026322264 = product of:
      0.039483394 = sum of:
        0.031253725 = product of:
          0.1250149 = sum of:
            0.1250149 = weight(_text_:author's in 4702) [ClassicSimilarity], result of:
              0.1250149 = score(doc=4702,freq=2.0), product of:
                0.33674997 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05011046 = queryNorm
                0.3712395 = fieldWeight in 4702, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4702)
          0.25 = coord(1/4)
        0.00822967 = weight(_text_:a in 4702) [ClassicSimilarity], result of:
          0.00822967 = score(doc=4702,freq=10.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.14243183 = fieldWeight in 4702, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4702)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - An empirical study has shown that the real process of subject cataloging does not correspond entirely to theoretical descriptions in textbooks and international standards. The purpose of this paper is to address the issue of whether it is possible for catalogers who have not received formal training to perform subject cataloging in a different way to their trained colleagues. Design/methodology/approach - A qualitative study was conducted in 2001 among five Slovenian public library catalogers. The resulting model is compared to previous findings. Findings - First, all catalogers attempted to determine what the book was about. While the American catalogers tried to understand the topic and the author's intent, the Slovenian catalogers appeared to focus on the topic only. Slovenian and American academic library catalogers did not demonstrate any anticipation of possible uses that users might have of the book, while this was important for American public library catalogers. All catalogers used existing records to build new ones and/or to search for subject headings. The verification of subject representation with the indexing language was the last step in the subject cataloging process of American catalogers, often skipped by Slovenian catalogers. Research limitations/implications - The small convenience sample limits the findings. Practical implications - Comparison of the subject cataloging processes of Slovenian and American catalogers, two different groups, is important because they both contribute to OCLC's WorldCat database. If the cataloging community is building a universal catalog and approaches to subject description are different, then the resulting subject representations might also be different. Originality/value - This is one of the very few empirical studies of subject cataloging and indexing.
    Type
    a
  16. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.03
    0.0259563 = product of:
      0.038934447 = sum of:
        0.011777344 = weight(_text_:a in 5830) [ClassicSimilarity], result of:
          0.011777344 = score(doc=5830,freq=8.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.20383182 = fieldWeight in 5830, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.027157102 = product of:
          0.054314204 = sum of:
            0.054314204 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.054314204 = score(doc=5830,freq=2.0), product of:
                0.1754783 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05011046 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested
    Date
    5. 8.2006 13:22:08
    Type
    a
  17. Sauperl, A.: Catalogers' common ground and shared knowledge (2004) 0.03
    0.025743043 = product of:
      0.038614564 = sum of:
        0.031253725 = product of:
          0.1250149 = sum of:
            0.1250149 = weight(_text_:author's in 2069) [ClassicSimilarity], result of:
              0.1250149 = score(doc=2069,freq=2.0), product of:
                0.33674997 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05011046 = queryNorm
                0.3712395 = fieldWeight in 2069, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2069)
          0.25 = coord(1/4)
        0.00736084 = weight(_text_:a in 2069) [ClassicSimilarity], result of:
          0.00736084 = score(doc=2069,freq=8.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.12739488 = fieldWeight in 2069, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2069)
      0.6666667 = coord(2/3)
    
    Abstract
    The problem of multiple interpretations of meaning in the indexing process has been mostly avoided by information scientists. Among the few who have addressed this question are Clare Beghtol and Jens Erik Mai. Their findings and findings of other researchers in the area of information science, social psychology, and psycholinguistics indicate that the source of the problem might lie in the background and culture of each indexer or cataloger. Are the catalogers aware of the problem? A general model of the indexing process was developed from observations and interviews of 12 catalogers in three American academic libraries. The model is illustrated with a hypothetical cataloger's process. The study with catalogers revealed that catalogers are aware of the author's, the user's, and their own meaning, but do not try to accommodate them all. On the other hand, they make every effort to build common ground with catalog users by studying documents related to the document being cataloged, and by considering catalog records and subject headings related to the subject identified in the document being cataloged. They try to build common ground with other catalogers by using cataloging tools and by inferring unstated rules of cataloging from examples in the catalogs.
    Type
    a
  18. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel. KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.02
    0.022030516 = product of:
      0.033045772 = sum of:
        0.005888672 = weight(_text_:a in 251) [ClassicSimilarity], result of:
          0.005888672 = score(doc=251,freq=2.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.10191591 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
        0.027157102 = product of:
          0.054314204 = sum of:
            0.054314204 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
              0.054314204 = score(doc=251,freq=2.0), product of:
                0.1754783 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05011046 = queryNorm
                0.30952093 = fieldWeight in 251, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=251)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  19. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.02
    0.020790672 = product of:
      0.031186007 = sum of:
        0.010818182 = weight(_text_:a in 5589) [ClassicSimilarity], result of:
          0.010818182 = score(doc=5589,freq=12.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.18723148 = fieldWeight in 5589, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.020367825 = product of:
          0.04073565 = sum of:
            0.04073565 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.04073565 = score(doc=5589,freq=2.0), product of:
                0.1754783 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05011046 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
    Type
    a
  20. Fairthorne, R.A.: Temporal structure in bibliographic classification (1985) 0.02
    0.020698175 = product of:
      0.031047262 = sum of:
        0.018752236 = product of:
          0.07500894 = sum of:
            0.07500894 = weight(_text_:author's in 3651) [ClassicSimilarity], result of:
              0.07500894 = score(doc=3651,freq=2.0), product of:
                0.33674997 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.05011046 = queryNorm
                0.22274372 = fieldWeight in 3651, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3651)
          0.25 = coord(1/4)
        0.012295027 = weight(_text_:a in 3651) [ClassicSimilarity], result of:
          0.012295027 = score(doc=3651,freq=62.0), product of:
            0.057779714 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.05011046 = queryNorm
            0.21279141 = fieldWeight in 3651, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3651)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper, presented at the Ottawa Conference on the Conceptual Basis of the Classification of Knowledge in 1971, is one of Fairthorne's more perceptive works and deserves a wide audience, especially as it breaks new ground in classification theory. In discussing the notion of discourse, he makes a "distinction between what discourse mentions and what discourse is about" [emphasis added], considered as a "fundamental factor to the relativistic nature of bibliographic classification" (p. 360). A table of mathematical functions, for example, describes exactly something represented by a collection of digits, but, without a preface, this table does not fit into a broader context. Some indication of the author's intent is needed to fit the table into that context. This intent may appear in a title, chapter heading, class number or some other aid. Discourse on and discourse about something "cannot be determined solely from what it mentions" (p. 361). Some kind of background is needed. Fairthorne further develops the theme that knowledge about a subject comes from previous knowledge, thus adding a temporal factor to classification. "Some extra textual criteria are needed" in order to classify (p. 362). For example, "documents that mention the same things, but are on different topics, will have different ancestors, in the sense of preceding documents to which they are linked by various bibliographic characteristics ... [and] ... they will have different descendants" (p. 363). The classifier has to distinguish between documents that "mention exactly the same thing" but are not about the same thing. The classifier does this by classifying "sets of documents that form their histories, their bibliographic world lines" (p. 363). The practice of citation is one method of performing the linking and presents a "fan" of documents connected by a chain of citations to past work. The fan is seen as the effect of generations of documents: each generation connected to the previous one, and all ancestral to the present document. Thus, there are levels in temporal structure (that is, antecedent and successor documents), and these require that documents be identified in relation to other documents. This gives a set of documents an "irrevocable order," a loose order which Fairthorne calls "bibliographic time," and which is "generated by the fact of continual growth" (p. 364). He does not consider "bibliographic time" to be equivalent to physical time because bibliographic events, as part of communication, require delay. Sets of documents, as indicated above, rather than single works, are used in classification. While an event, a person, or a unique feature of the environment may create a class of one (such as the French Revolution, Napoleon, Niagara Falls), revolutions, emperors, and waterfalls are sets which, as sets, will subsume individuals and make normal classes.
    The fan of past documents may be seen across time as a philosophical "wake," translated documents as a sideways relationship, and future documents as another fan spreading forward from a given document (p. 365). The "overlap of reading histories can be used to detect common interests among readers" (p. 365), and readers may be classified accordingly. Finally, Fairthorne rejects the notion of a "general" classification, which he regards as a mirage, to be replaced by a citation-type network to identify classes. An interesting feature of his work lies in his linkage between old and new documents via a bibliographic method (citations, authors' names, imprints, style, and vocabulary) rather than topical (subject) terms. This is an indirect method of creating classes. The subject (aboutness) is conceived as a finite, common sharing of knowledge over time (past, present, and future) as opposed to the more common hierarchy of topics in an infinite schema assumed to be universally useful. Fairthorne, a mathematician by training, is a prolific writer on the foundations of classification and information. His professional career includes work with the Royal Engineers Chemical Warfare Section and the Royal Aircraft Establishment (RAE). He was the founder of the Computing Unit which became the RAE Mathematics Department.
    Footnote
    Original in: Ottawa Conference on the Conceptual Basis of the Classification of Knowledge, Ottawa, 1971. Ed.: Jerzy A. Wojciechowski. Pullach: Verlag Dokumentation 1974. S.404-412.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
    Type
    a

Languages

  • e 131
  • d 14
  • f 1
  • nl 1

Types

  • a 139
  • m 4
  • el 3
  • x 2
  • d 1
  • s 1