Search (43 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.05
    0.04661455 = product of:
      0.06992182 = sum of:
        0.03217218 = weight(_text_:index in 2293) [ClassicSimilarity], result of:
          0.03217218 = score(doc=2293,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.14483857 = fieldWeight in 2293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2293)
        0.037749637 = sum of:
          0.017088482 = weight(_text_:classification in 2293) [ClassicSimilarity], result of:
            0.017088482 = score(doc=2293,freq=2.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.10555911 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.020661155 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.020661155 = score(doc=2293,freq=2.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.6666667 = coord(2/3)
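The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown. As a minimal sketch, the leaf for the term "index" in result 1 can be reproduced from the constants read off the tree; `idf` and `leaf_score` are illustrative helper names, not Lucene API calls:

```python
import math

# Constants read off the explain tree above
QUERY_NORM = 0.05083213
MAX_DOCS = 44218

def idf(doc_freq, max_docs=MAX_DOCS):
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def leaf_score(freq, term_idf, field_norm, query_norm=QUERY_NORM):
    # weight(term in doc) = queryWeight * fieldWeight
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    query_weight = term_idf * query_norm  # query-side factor, same for every doc
    field_weight = tf * term_idf * field_norm
    return query_weight * field_weight

# Term "index" in doc 2293 (result 1): freq=2.0, fieldNorm=0.0234375
print(idf(1520))                              # ≈ 4.369764
print(leaf_score(2.0, idf(1520), 0.0234375))  # ≈ 0.03217218

# Final score for result 1: coord(2/3) * sum of the matching clause scores
print((0.03217218 + 0.037749637) * 2 / 3)     # ≈ 0.04661455
```

The same arithmetic accounts for every tree on this page; only freq, idf, and fieldNorm change per document and field.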
    
    Date
    27. 9.2005 14:22:19
    Footnote
Rez. in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon); "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research on a process which has not been a frequent object of study include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on the holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers with at least one year's experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews. 
A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
    This document will be particularly useful to subject cataloguing teachers and trainers, who could use the model to design case descriptions and exercises. We believe it is an accurate description of the reality of subject cataloguing today. But now that we know how things are done, the next interesting question may be: Is that the best way? Is there a better, more efficient way to do things? We can only hope that Dr. Sauperl will soon provide her own view of methods and techniques that could improve the flow of work or address the cataloguers' concern as to the lack of feedback on their work. Her several excellent suggestions for further research in this area all build on bits and pieces of what is done already, and stay well away from what could be done by the various actors in the area, from the designers of controlled vocabularies and authority files to those who use these tools on a daily basis to index, classify, or search for information."
  2. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.04
    0.04194404 = product of:
      0.12583213 = sum of:
        0.12583213 = sum of:
          0.056961603 = weight(_text_:classification in 5835) [ClassicSimilarity], result of:
            0.056961603 = score(doc=5835,freq=2.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.35186368 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.06887052 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.06887052 = score(doc=5835,freq=2.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.33333334 = coord(1/3)
    
    Date
    5. 8.2006 13:22:44
  3. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.04
    0.03984704 = product of:
      0.11954111 = sum of:
        0.11954111 = sum of:
          0.0644447 = weight(_text_:classification in 5830) [ClassicSimilarity], result of:
            0.0644447 = score(doc=5830,freq=4.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.39808834 = fieldWeight in 5830, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.055096414 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.055096414 = score(doc=5830,freq=2.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.33333334 = coord(1/3)
    
    Date
    5. 8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson and M. Hudon
  4. Farrow, J.: All in the mind : concept analysis in indexing (1995) 0.03
    0.028597495 = product of:
      0.08579248 = sum of:
        0.08579248 = weight(_text_:index in 2926) [ClassicSimilarity], result of:
          0.08579248 = score(doc=2926,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.3862362 = fieldWeight in 2926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=2926)
      0.33333334 = coord(1/3)
    
    Abstract
    The indexing process consists of the comprehension of the document to be indexed, followed by the production of a set of index terms. Differences between academic indexing and back-of-the-book indexing are discussed. Text comprehension is a branch of human information processing, and it is argued that the model of text comprehension and production developed by van Dijk and Kintsch can form the basis for a cognitive process model of indexing. Strategies for testing such a model are suggested
  5. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.03
    0.025166426 = product of:
      0.075499274 = sum of:
        0.075499274 = sum of:
          0.034176964 = weight(_text_:classification in 6525) [ClassicSimilarity], result of:
            0.034176964 = score(doc=6525,freq=2.0), product of:
              0.16188543 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05083213 = queryNorm
              0.21111822 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
          0.04132231 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
            0.04132231 = score(doc=6525,freq=2.0), product of:
              0.17800546 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05083213 = queryNorm
              0.23214069 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
      0.33333334 = coord(1/3)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  6. Hutchins, W.J.: ¬The concept of 'aboutness' in subject indexing (1978) 0.03
    0.025022808 = product of:
      0.07506842 = sum of:
        0.07506842 = weight(_text_:index in 1961) [ClassicSimilarity], result of:
          0.07506842 = score(doc=1961,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.33795667 = fieldWeight in 1961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
      0.33333334 = coord(1/3)
    
    Abstract
    The common view of the 'aboutness' of documents is that the index entries (or classifications) assigned to documents represent or indicate in some way the total contents of documents; indexing and classifying are seen as processes involving the 'summarization' of the texts of documents. In this paper an alternative concept of 'aboutness' is proposed based on an analysis of the linguistic organization of texts, which is felt to be more appropriate in many indexing environments (particularly in non-specialized libraries and information services) and which has implications for the evaluation of the effectiveness of indexing systems
  7. Andersson, R.; Holst, E.: Indexes and other depictions of fictions : a new model for analysis empirically tested (1996) 0.02
    0.02144812 = product of:
      0.06434436 = sum of:
        0.06434436 = weight(_text_:index in 473) [ClassicSimilarity], result of:
          0.06434436 = score(doc=473,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.28967714 = fieldWeight in 473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.33333334 = coord(1/3)
    
    Abstract
    In this study descriptions of a novel by 100 users at 2 Swedish public libraries, Malmö and Mölndal, Mar-Apr 95, were compared to the index terms used for the novels at these libraries. Describes previous systems for fiction indexing, the 2 libraries, and the users interviewed. Compares the AMP system with their own model. The latter operates with terms under the headings phenomena, frame and author's intention. The similarities between the users' and indexers' descriptions were sufficiently close to make it possible to retrieve fiction in accordance with users' wishes in Mölndal, and would have been in Malmö, had more books been indexed with more terms. Sometimes the similarities were close enough for users to retrieve fiction on their own
  8. Mai, J.-E.: ¬The role of documents, domains and decisions in indexing (2004) 0.02
    0.020221483 = product of:
      0.06066445 = sum of:
        0.06066445 = weight(_text_:index in 2653) [ClassicSimilarity], result of:
          0.06066445 = score(doc=2653,freq=4.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.27311024 = fieldWeight in 2653, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=2653)
      0.33333334 = coord(1/3)
    
    Content
    1. Introduction The document at hand is often regarded as the most important entity for analysis in the indexing situation. The indexer's focus is directed to the "entity and its faithful description" (Soergel, 1985, 227) and the indexer is advised to "stick to the text and the author's claims" (Lancaster, 2003, 37). The indexer's aim is to establish the subject matter based on an analysis of the document with the goal of representing the document as truthfully as possible and to ensure the subject representation's validity by remaining neutral and objective. To help indexers with their task they are guided towards particular and important attributes of the document that could help them determine the document's subject matter. The exact attributes the indexer is recommended to examine vary, but typical examples are: the title, the abstract, the table of contents, chapter headings, chapter subheadings, preface, introduction, foreword, the text itself, bibliographical references, index entries, illustrations, diagrams, and tables and their captions. The exact recommendations vary according to the type of document that is being indexed (monographs vs. periodical articles, for instance). It is clear that indexers should provide faithful descriptions, that indexers should represent the author's claims, and that the document's attributes are helpful points of analysis. However, indexers need much more guidance when determining the subject than simply the documents themselves. One approach that could be taken to handle the situation is a user-oriented approach in which it is argued that the indexer should ask, "how should I make this document ... visible to potential users? What terms should I use to convey its knowledge to those interested?" (Albrechtsen, 1993, 222). The basic idea is that indexers need to have the users' information needs and terminology in mind when determining the subject matter of documents as well as when selecting index terms.
  9. Weinberg, B.H.: Why indexing fails the researcher (1988) 0.02
    0.017873434 = product of:
      0.0536203 = sum of:
        0.0536203 = weight(_text_:index in 703) [ClassicSimilarity], result of:
          0.0536203 = score(doc=703,freq=2.0), product of:
            0.2221244 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.05083213 = queryNorm
            0.24139762 = fieldWeight in 703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=703)
      0.33333334 = coord(1/3)
    
    Abstract
    It is a truism in information science that indexing is associated with 'aboutness', and that index terms that accurately represent what a document is about will serve the needs of the user/searcher well. It is contended in this paper that indexing which is limited to the representation of aboutness serves the novice in a discipline adequately, but does not serve the scholar or researcher, who is concerned with highly specific aspects of or points-of-view on a subject. The linguistic analogs of 'aboutness' and 'aspects' are 'topic' and 'comment' respectively. Serial indexing services deal with topics at varying levels of specificity, but neglect comment almost entirely. This may explain the underutilization of secondary information services by scholars, as has been repeatedly demonstrated in user studies. It may also account for the incomplete lists of bibliographic references in many research papers. Natural language searching of full-text databases does not solve this problem, because the aspect of a topic of interest to researchers is often inexpressible in concrete terms. The thesis is illustrated with examples of indexing failures in research projects the author has conducted on a range of linguistic and library-information science topics. Finally, the question of whether indexing can be improved to meet the needs of researchers is examined
  10. Pejtersen, A.M.: ¬A new approach to the classification of fiction (1982) 0.02
    0.0164434 = product of:
      0.049330197 = sum of:
        0.049330197 = product of:
          0.098660395 = sum of:
            0.098660395 = weight(_text_:classification in 7240) [ClassicSimilarity], result of:
              0.098660395 = score(doc=7240,freq=6.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.6094458 = fieldWeight in 7240, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7240)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification Research, Augsburg, 28.6.-2.7.1982. Ed. I. Dahlberg
  11. Pejtersen, A.M.: Fiction and library classification (1978) 0.02
    0.015189761 = product of:
      0.045569282 = sum of:
        0.045569282 = product of:
          0.091138564 = sum of:
            0.091138564 = weight(_text_:classification in 722) [ClassicSimilarity], result of:
              0.091138564 = score(doc=722,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.5629819 = fieldWeight in 722, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=722)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  12. Beghtol, C.: Bibliographic classification theory and text linguistics : aboutness, analysis, intertextuality and the cognitive act of classifying documents (1986) 0.02
    0.015189761 = product of:
      0.045569282 = sum of:
        0.045569282 = product of:
          0.091138564 = sum of:
            0.091138564 = weight(_text_:classification in 1346) [ClassicSimilarity], result of:
              0.091138564 = score(doc=1346,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.5629819 = fieldWeight in 1346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=1346)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  13. Bland, R.N.: ¬The concept of intellectual level in cataloging and classification (1983) 0.02
    0.015189761 = product of:
      0.045569282 = sum of:
        0.045569282 = product of:
          0.091138564 = sum of:
            0.091138564 = weight(_text_:classification in 321) [ClassicSimilarity], result of:
              0.091138564 = score(doc=321,freq=8.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.5629819 = fieldWeight in 321, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=321)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper traces the history of the concept of intellectual level in cataloging and classification in the United States. Past cataloging codes, subject-heading practice, and classification systems have provided library users with little systematic information concerning the intellectual level or intended audience of works. Reasons for this omission are discussed, and arguments are developed to show that this kind of information would be a useful addition to the catalog record of the present and the future.
    Source
    Cataloging and classification quarterly. 4(1983) no.1, S.53-63
  14. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.01
    0.013425979 = product of:
      0.040277936 = sum of:
        0.040277936 = product of:
          0.08055587 = sum of:
            0.08055587 = weight(_text_:classification in 6320) [ClassicSimilarity], result of:
              0.08055587 = score(doc=6320,freq=4.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.49761042 = fieldWeight in 6320, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6320)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Translated title: Analysis model for a classification of iconographic documents
  15. Merrill, W.S.: Code for classifiers : principles governing the consistent placing of books in a system of classification (1969) 0.01
    0.013291041 = product of:
      0.039873123 = sum of:
        0.039873123 = product of:
          0.07974625 = sum of:
            0.07974625 = weight(_text_:classification in 1640) [ClassicSimilarity], result of:
              0.07974625 = score(doc=1640,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.49260917 = fieldWeight in 1640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1640)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  16. Jörgensen, C.: ¬The applicability of selected classification systems to image attributes (1996) 0.01
    0.01151038 = product of:
      0.03453114 = sum of:
        0.03453114 = product of:
          0.06906228 = sum of:
            0.06906228 = weight(_text_:classification in 5175) [ClassicSimilarity], result of:
              0.06906228 = score(doc=5175,freq=6.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.42661208 = fieldWeight in 5175, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5175)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Recent research investigated image attributes as reported by participants in describing, sorting, and searching tasks with images and defined 46 specific image attributes which were then organized into 12 major classes. Attributes were also grouped as being 'perceptual' (directly stimulated by visual percepts), 'interpretive' (requiring inference from visual percepts), and 'reactive' (cognitive and affective responses to the images). This research describes the coverage of two image indexing and classification systems and one general classification system in relation to the previous findings and analyzes the extent to which components of these systems are capable of describing the range of image attributes as revealed by the previous research
  17. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.01
    0.01151038 = product of:
      0.03453114 = sum of:
        0.03453114 = product of:
          0.06906228 = sum of:
            0.06906228 = weight(_text_:classification in 2021) [ClassicSimilarity], result of:
              0.06906228 = score(doc=2021,freq=6.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.42661208 = fieldWeight in 2021, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
    LCSH
    Classification / Books
    Subject
    Classification / Books
  18. Wyllie, J.: Concept indexing : the world beyond the windows (1990) 0.01
    0.011392321 = product of:
      0.034176964 = sum of:
        0.034176964 = product of:
          0.06835393 = sum of:
            0.06835393 = weight(_text_:classification in 2977) [ClassicSimilarity], result of:
              0.06835393 = score(doc=2977,freq=2.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.42223644 = fieldWeight in 2977, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2977)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper argues that the realisation of the electronic hypermedia of the future depends on integrating the technology of free text retrieval with the classification-based discipline of content analysis
  19. Wilkinson, C.L.: Intellectual level as a search enhancement in the online environment : summation and implications (1990) 0.01
    0.010740783 = product of:
      0.03222235 = sum of:
        0.03222235 = product of:
          0.0644447 = sum of:
            0.0644447 = weight(_text_:classification in 479) [ClassicSimilarity], result of:
              0.0644447 = score(doc=479,freq=4.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.39808834 = fieldWeight in 479, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper summarizes the papers presented by the members of the panel on "The Concept of Intellectual Level in Cataloging and Classification." The implication of adding intellectual level to the MARC record and creating intellectual level indexes in online catalogs are discussed. Conclusion is reached that providing intellectual level will not only be costly but may perhaps even be a disservice to library users.
    Source
    Cataloging and classification quarterly. 11(1990) no.1, S.89-97
  20. Bi, Y.: Sentiment classification in social media data by combining triplet belief functions (2022) 0.01
    0.00986604 = product of:
      0.029598119 = sum of:
        0.029598119 = product of:
          0.059196237 = sum of:
            0.059196237 = weight(_text_:classification in 613) [ClassicSimilarity], result of:
              0.059196237 = score(doc=613,freq=6.0), product of:
                0.16188543 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05083213 = queryNorm
                0.3656675 = fieldWeight in 613, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=613)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Sentiment analysis is an emerging technique that caters for semantic orientation and opinion mining. It is increasingly used to analyze online reviews and posts for identifying people's opinions and attitudes to products and events in order to improve business performance of companies and aid to make better organizing strategies of events. This paper presents an innovative approach to combining the outputs of sentiment classifiers under the framework of belief functions. It consists of the formulation of sentiment classifier outputs in the triplet evidence structure and the development of general formulas for combining triplet functions derived from sentiment classification results via three evidential combination rules along with comparative analyses. The empirical studies have been conducted on examining the effectiveness of our method for sentiment classification individually and in combination, and the results demonstrate that the best combined classifiers by our method outperforms the best individual classifiers over five review datasets.
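The triplet evidence structure the abstract describes can be illustrated with Dempster's rule over the frame {positive, negative}: each classifier's output becomes a mass function assigning belief to {pos}, {neg}, and the whole frame (uncommitted, "unknown" mass). This is a generic sketch of one evidential combination rule, not the paper's exact formulas; the triplet values below are invented for illustration:

```python
def combine_dempster(m1, m2):
    """Combine two triplet mass functions (m_pos, m_neg, m_unknown)
    over the frame {pos, neg} with Dempster's rule."""
    p1, n1, u1 = m1
    p2, n2, u2 = m2
    conflict = p1 * n2 + n1 * p2  # mass falling on the empty set
    k = 1.0 - conflict            # normalization factor
    pos = (p1 * p2 + p1 * u2 + u1 * p2) / k
    neg = (n1 * n2 + n1 * u2 + u1 * n2) / k
    unknown = (u1 * u2) / k
    return pos, neg, unknown

# Two sentiment classifiers, each emitting (belief in pos, belief in neg, uncommitted)
combined = combine_dempster((0.6, 0.2, 0.2), (0.5, 0.3, 0.2))
print(combined)  # agreement on "pos" reinforces it: ≈ (0.722, 0.222, 0.056)
```

The combined masses always re-normalize to 1; agreement between classifiers concentrates belief, while conflict (p1·n2 + n1·p2) is discarded and rescaled away.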