Search (42 results, page 1 of 3)

  • theme_ss:"Inhaltsanalyse"
  1. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.04
    0.04164443 = product of:
      0.12493329 = sum of:
        0.12493329 = sum of:
          0.056554716 = weight(_text_:classification in 5835) [ClassicSimilarity], result of:
            0.056554716 = score(doc=5835,freq=2.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.35186368 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
          0.06837857 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.06837857 = score(doc=5835,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.38690117 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5835)
      0.33333334 = coord(1/3)
    
    Date
    5.8.2006 13:22:44
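The explain trees in this listing are standard Lucene ClassicSimilarity output. As a sketch (assuming Lucene's classic TF-IDF formulas, tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); queryNorm and fieldNorm are taken as given from the listing), the per-term weights and the final score of result 1 can be reproduced in a few lines of Python:

```python
import math

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene's ClassicSimilarity:
    weight = queryWeight * fieldWeight
           = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Result 1 (doc 5835): two of three query terms match, hence coord(1/3).
w_classification = term_weight(2.0, 4974, 44218, 0.05046903, 0.078125)
w_22 = term_weight(2.0, 3622, 44218, 0.05046903, 0.078125)
score = (1.0 / 3.0) * (w_classification + w_22)
```

The same function reproduces every weight(...) node in the listing once its freq, docFreq, and fieldNorm values are plugged in.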
  2. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.04
    0.039562404 = product of:
      0.11868721 = sum of:
        0.11868721 = sum of:
          0.06398436 = weight(_text_:classification in 5830) [ClassicSimilarity], result of:
            0.06398436 = score(doc=5830,freq=4.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.39808834 = fieldWeight in 5830, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.054702852 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.054702852 = score(doc=5830,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.33333334 = coord(1/3)
    
    Date
    5.8.2006 13:22:08
    Source
    Classification research for knowledge representation and organization. Proc. 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991. Ed. by N.J. Williamson and M. Hudon
  3. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.03
    0.032115534 = product of:
      0.0963466 = sum of:
        0.0963466 = weight(_text_:interest in 2691) [ClassicSimilarity], result of:
          0.0963466 = score(doc=2691,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.38424414 = fieldWeight in 2691, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2691)
      0.33333334 = coord(1/3)
    
    Abstract
    We describe here NKRL (Narrative Knowledge Representation Language), a semantic formalism for taking into account the characteristics of narrative multimedia documents. In these documents, the information content consists of descriptions of 'events' that relate the real or intended behaviour of some 'actors' (characters, personages, etc.). Narrative documents of economic interest include news stories, corporate documents, normative and legal texts, intelligence messages, representations of patients' medical records, etc. NKRL is characterised by the use of several knowledge representation principles and several high-level inference tools.
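The abstract only names NKRL; purely as an illustration (the predicate and role names below are hypothetical, not NKRL's actual syntax), an event-centred representation of a narrative statement and a naive query against it might be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class Occurrence:
    """A hypothetical event record in the spirit of NKRL's predicative
    occurrences: a predicate plus role fillers (names are illustrative)."""
    predicate: str                          # e.g. 'MOVE', 'PRODUCE'
    roles: dict = field(default_factory=dict)

    def matches(self, pattern: "Occurrence") -> bool:
        """Naive querying: every role constraint in `pattern` must be
        filled identically in this occurrence."""
        return (self.predicate == pattern.predicate and
                all(self.roles.get(r) == v for r, v in pattern.roles.items()))

# 'company_1 moves its headquarters to Paris' as an event, then a query.
event = Occurrence('MOVE', {'SUBJ': 'company_1', 'OBJ': 'headquarters_1',
                            'DEST': 'paris_'})
query = Occurrence('MOVE', {'SUBJ': 'company_1'})
```

A real NKRL template would add typed concept hierarchies and inference rules on top of such structures.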
  4. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.02
    0.024986658 = product of:
      0.07495997 = sum of:
        0.07495997 = sum of:
          0.03393283 = weight(_text_:classification in 6525) [ClassicSimilarity], result of:
            0.03393283 = score(doc=6525,freq=2.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.21111822 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
          0.04102714 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
            0.04102714 = score(doc=6525,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.23214069 = fieldWeight in 6525, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6525)
      0.33333334 = coord(1/3)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  5. Weinberg, B.H.: Why indexing fails the researcher (1988) 0.02
    0.022939665 = product of:
      0.068818994 = sum of:
        0.068818994 = weight(_text_:interest in 703) [ClassicSimilarity], result of:
          0.068818994 = score(doc=703,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=703)
      0.33333334 = coord(1/3)
    
    Abstract
    It is a truism in information science that indexing is associated with 'aboutness', and that index terms that accurately represent what a document is about will serve the needs of the user/searcher well. It is contended in this paper that indexing which is limited to the representation of aboutness serves the novice in a discipline adequately, but does not serve the scholar or researcher, who is concerned with highly specific aspects of or points-of-view on a subject. The linguistic analogs of 'aboutness' and 'aspects' are 'topic' and 'comment' respectively. Serial indexing services deal with topics at varying levels of specificity, but neglect comment almost entirely. This may explain the underutilization of secondary information services by scholars, as has been repeatedly demonstrated in user studies. It may also account for the incomplete lists of bibliographic references in many research papers. Natural language searching of full-text databases does not solve this problem, because the aspect of a topic of interest to researchers is often inexpressible in concrete terms. The thesis is illustrated with examples of indexing failures in research projects the author has conducted on a range of linguistic and library-information science topics. Finally, the question of whether indexing can be improved to meet the needs of researchers is examined.
  6. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.02
    0.022939665 = product of:
      0.068818994 = sum of:
        0.068818994 = weight(_text_:interest in 4972) [ClassicSimilarity], result of:
          0.068818994 = score(doc=4972,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 4972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.33333334 = coord(1/3)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
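SentiStrength itself relies on a large human-curated lexicon plus booster, negation, and emoticon rules; the sketch below is only a toy of the 'direct indicators' idea described above (the lexicon entries are invented), reporting sentiment on SentiStrength's dual scale of +1..+5 positive and -1..-5 negative strength:

```python
# Toy lexicon of direct sentiment cues (entries invented for illustration).
LEXICON = {'love': 3, 'great': 2, 'awful': -4, 'bad': -2}

def sentiment_strength(text):
    """Return (positive, negative) strengths: the strongest positive and
    strongest negative cue found, on dual scales 1..5 and -1..-5."""
    pos, neg = 1, -1
    for word in text.lower().split():
        s = LEXICON.get(word.strip('.,!?'), 0)
        if s > 0:
            pos = max(pos, s)
        elif s < 0:
            neg = min(neg, s)
    return pos, neg
```

Because the cues are direct sentiment words rather than genre or topic indicators, a detector of this shape degrades gracefully across the six social web data sets named in the abstract.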
  7. Huang, X.; Soergel, D.; Klavans, J.L.: Modeling and analyzing the topicality of art images (2015) 0.02
    0.022939665 = product of:
      0.068818994 = sum of:
        0.068818994 = weight(_text_:interest in 2127) [ClassicSimilarity], result of:
          0.068818994 = score(doc=2127,freq=2.0), product of:
            0.25074318 = queryWeight, product of:
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.05046903 = queryNorm
            0.27446008 = fieldWeight in 2127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.9682584 = idf(docFreq=835, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2127)
      0.33333334 = coord(1/3)
    
    Abstract
    This study demonstrates an improved conceptual foundation to support well-structured analysis of image topicality. First we present a conceptual framework for analyzing image topicality, explicating the layers, the perspectives, and the topical relevance relationships involved in modeling the topicality of art images. We adapt a generic relevance typology to image analysis by extending it with definitions and relationships specific to the visual art domain and integrating it with schemes of image-text relationships that are important for image subject indexing. We then apply the adapted typology to analyze the topical relevance relationships between 11 art images and 768 image tags assigned by art historians and librarians. The original contribution of our work is the topical structure analysis of image tags that allows the viewer to more easily grasp the content, context, and meaning of an image and quickly tune into aspects of interest; it could also guide both the indexer and the searcher to specify image tags/descriptors in a more systematic and precise manner and thus improve the match between the two parties. An additional contribution is systematically examining and integrating the variety of image-text relationships from a relevance perspective. The paper concludes with implications for relational indexing and social tagging.
  8. Pejtersen, A.M.: A new approach to the classification of fiction (1982) 0.02
    0.016325941 = product of:
      0.048977822 = sum of:
        0.048977822 = product of:
          0.097955644 = sum of:
            0.097955644 = weight(_text_:classification in 7240) [ClassicSimilarity], result of:
              0.097955644 = score(doc=7240,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.6094458 = fieldWeight in 7240, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7240)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Universal classification I: subject analysis and ordering systems. Proc. of the 4th Int. Study Conf. on Classification Research, Augsburg, 28.6.-2.7.1982. Ed. I. Dahlberg
  9. Pejtersen, A.M.: Fiction and library classification (1978) 0.02
    0.0150812585 = product of:
      0.045243774 = sum of:
        0.045243774 = product of:
          0.09048755 = sum of:
            0.09048755 = weight(_text_:classification in 722) [ClassicSimilarity], result of:
              0.09048755 = score(doc=722,freq=2.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5629819 = fieldWeight in 722, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=722)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Beghtol, C.: Bibliographic classification theory and text linguistics : aboutness, analysis, intertextuality and the cognitive act of classifying documents (1986) 0.02
    0.0150812585 = product of:
      0.045243774 = sum of:
        0.045243774 = product of:
          0.09048755 = sum of:
            0.09048755 = weight(_text_:classification in 1346) [ClassicSimilarity], result of:
              0.09048755 = score(doc=1346,freq=2.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5629819 = fieldWeight in 1346, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.125 = fieldNorm(doc=1346)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  11. Bland, R.N.: ¬The concept of intellectual level in cataloging and classification (1983) 0.02
    0.0150812585 = product of:
      0.045243774 = sum of:
        0.045243774 = product of:
          0.09048755 = sum of:
            0.09048755 = weight(_text_:classification in 321) [ClassicSimilarity], result of:
              0.09048755 = score(doc=321,freq=8.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.5629819 = fieldWeight in 321, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=321)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper traces the history of the concept of intellectual level in cataloging and classification in the United States. Past cataloging codes, subject-heading practice, and classification systems have provided library users with little systematic information concerning the intellectual level or intended audience of works. Reasons for this omission are discussed, and arguments are developed to show that this kind of information would be a useful addition to the catalog record of the present and the future.
    Source
    Cataloging and classification quarterly. 4(1983) no.1, S.53-63
  12. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.01
    0.013330075 = product of:
      0.039990224 = sum of:
        0.039990224 = product of:
          0.07998045 = sum of:
            0.07998045 = weight(_text_:classification in 6320) [ClassicSimilarity], result of:
              0.07998045 = score(doc=6320,freq=4.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.49761042 = fieldWeight in 6320, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6320)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Translation of the title: Analysis model for a classification of iconographic documents
  13. Merrill, W.S.: Code for classifiers : principles governing the consistent placing of books in a system of classification (1969) 0.01
    0.013196101 = product of:
      0.039588302 = sum of:
        0.039588302 = product of:
          0.079176605 = sum of:
            0.079176605 = weight(_text_:classification in 1640) [ClassicSimilarity], result of:
              0.079176605 = score(doc=1640,freq=2.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.49260917 = fieldWeight in 1640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1640)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  14. Sauperl, A.: Subject determination during the cataloging process : the development of a system based on theoretical principles (2002) 0.01
    0.012493329 = product of:
      0.037479986 = sum of:
        0.037479986 = sum of:
          0.016966416 = weight(_text_:classification in 2293) [ClassicSimilarity], result of:
            0.016966416 = score(doc=2293,freq=2.0), product of:
              0.16072905 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.05046903 = queryNorm
              0.10555911 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
          0.02051357 = weight(_text_:22 in 2293) [ClassicSimilarity], result of:
            0.02051357 = score(doc=2293,freq=2.0), product of:
              0.17673394 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046903 = queryNorm
              0.116070345 = fieldWeight in 2293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=2293)
      0.33333334 = coord(1/3)
    
    Date
    27.9.2005 14:22:19
    Footnote
    Review in: Knowledge organization 30(2003) no.2, S.114-115 (M. Hudon): "This most interesting contribution to the literature of subject cataloguing originates in the author's doctoral dissertation, prepared under the direction of Jerry Saye at the University of North Carolina at Chapel Hill. In seven highly readable chapters, Alenka Sauperl develops possible answers to her principal research question: How do cataloguers determine or identify the topic of a document and choose appropriate subject representations? Specific questions at the source of this research, on a process which has not been a frequent object of study, include: Where do cataloguers look for an overall sense of what a document is about? How do they get an overall sense of what a document is about, especially when they are not familiar with the discipline? Do they consider only one or several possible interpretations? How do they translate meanings into appropriate and valid class numbers and subject headings? Using a strictly qualitative methodology, Dr. Sauperl's research is a study of twelve cataloguers in real-life situations. The author insists on a holistic rather than purely theoretical understanding of the process she is targeting. Participants in the study were professional cataloguers, with at least one year of experience in their current job at one of three large academic libraries in the Southeastern United States. All three libraries have a large central cataloguing department, and use OCLC sources and the same automated system; the context of cataloguing tasks is thus considered to be reasonably comparable. All participants were volunteers in this study, which combined two data-gathering techniques: the think-aloud method and time-line interviews.
A model of the subject cataloguing process was first developed from observations of a group of six cataloguers who were asked to independently perform original cataloguing on three nonfiction, non-serial items selected from materials regularly assigned to them for processing. The model was then used for follow-up interviews. Each participant in the second group of cataloguers was invited to reflect on his/her work process for a recent challenging document they had catalogued. Results are presented in 12 stories describing as many personal approaches to subject cataloguing. From these stories a summarization is offered and a theoretical model of subject cataloguing is developed which, according to the author, represents a realistic approach to subject cataloguing. Stories alternate comments from the researcher and direct quotations from the observed or interviewed cataloguers. Not surprisingly, the participants' stories reveal similarities in the sequence and accomplishment of several tasks in the process of subject cataloguing. Sauperl's proposed model, described in Chapter 5, includes as main stages: 1) Examination of the book and subject identification; 2) Search for subject headings; 3) Classification. Chapter 6 is a hypothetical case study, using the proposed model to describe the various stages of cataloguing a hypothetical resource. ...
  15. Jörgensen, C.: The applicability of selected classification systems to image attributes (1996) 0.01
    0.011428159 = product of:
      0.034284476 = sum of:
        0.034284476 = product of:
          0.06856895 = sum of:
            0.06856895 = weight(_text_:classification in 5175) [ClassicSimilarity], result of:
              0.06856895 = score(doc=5175,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.42661208 = fieldWeight in 5175, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5175)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Recent research investigated image attributes as reported by participants in describing, sorting, and searching tasks with images and defined 46 specific image attributes which were then organized into 12 major classes. Attributes were also grouped as being 'perceptual' (directly stimulated by visual percepts), 'interpretive' (requiring inference from visual percepts), and 'reactive' (cognitive and affective responses to the images). This research describes the coverage of two image indexing and classification systems and one general classification system in relation to the previous findings and analyzes the extent to which components of these systems are capable of describing the range of image attributes as revealed by the previous research
  16. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.01
    0.011428159 = product of:
      0.034284476 = sum of:
        0.034284476 = product of:
          0.06856895 = sum of:
            0.06856895 = weight(_text_:classification in 2021) [ClassicSimilarity], result of:
              0.06856895 = score(doc=2021,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.42661208 = fieldWeight in 2021, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
    LCSH
    Classification / Books
    Subject
    Classification / Books
  17. Wyllie, J.: Concept indexing : the world beyond the windows (1990) 0.01
    0.011310944 = product of:
      0.03393283 = sum of:
        0.03393283 = product of:
          0.06786566 = sum of:
            0.06786566 = weight(_text_:classification in 2977) [ClassicSimilarity], result of:
              0.06786566 = score(doc=2977,freq=2.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.42223644 = fieldWeight in 2977, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2977)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper argues that the realisation of the electronic hypermedia of the future depends on integrating the technology of free text retrieval with the classification-based discipline of content analysis
  18. Wilkinson, C.L.: Intellectual level as a search enhancement in the online environment : summation and implications (1990) 0.01
    0.01066406 = product of:
      0.03199218 = sum of:
        0.03199218 = product of:
          0.06398436 = sum of:
            0.06398436 = weight(_text_:classification in 479) [ClassicSimilarity], result of:
              0.06398436 = score(doc=479,freq=4.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.39808834 = fieldWeight in 479, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper summarizes the papers presented by the members of the panel on "The Concept of Intellectual Level in Cataloging and Classification." The implications of adding intellectual level to the MARC record and of creating intellectual level indexes in online catalogs are discussed. The conclusion is reached that providing intellectual level will not only be costly but may perhaps even be a disservice to library users.
    Source
    Cataloging and classification quarterly. 11(1990) no.1, S.89-97
  19. Bi, Y.: Sentiment classification in social media data by combining triplet belief functions (2022) 0.01
    0.009795565 = product of:
      0.029386694 = sum of:
        0.029386694 = product of:
          0.058773387 = sum of:
            0.058773387 = weight(_text_:classification in 613) [ClassicSimilarity], result of:
              0.058773387 = score(doc=613,freq=6.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.3656675 = fieldWeight in 613, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.046875 = fieldNorm(doc=613)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Sentiment analysis is an emerging technique that caters for semantic orientation and opinion mining. It is increasingly used to analyze online reviews and posts, identifying people's opinions of and attitudes to products and events in order to improve companies' business performance and to help devise better event-organizing strategies. This paper presents an innovative approach to combining the outputs of sentiment classifiers under the framework of belief functions. It consists of the formulation of sentiment classifier outputs in the triplet evidence structure and the development of general formulas for combining triplet functions derived from sentiment classification results via three evidential combination rules, along with comparative analyses. Empirical studies have been conducted examining the effectiveness of our method for sentiment classification individually and in combination, and the results demonstrate that the best classifiers combined by our method outperform the best individual classifiers over five review datasets.
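The abstract does not spell out its three combination rules; as a hedged illustration of one classic choice, Dempster's rule applied to a 'triplet' mass function (mass on {positive}, {negative}, and the whole frame, representing uncertainty) can be sketched as:

```python
from itertools import product

POS, NEG = frozenset({'pos'}), frozenset({'neg'})
THETA = POS | NEG  # the whole frame of discernment: 'don't know'

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    dicts from frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two classifiers' triplet outputs for one document, then their fusion.
m1 = {POS: 0.6, NEG: 0.1, THETA: 0.3}
m2 = {POS: 0.5, NEG: 0.2, THETA: 0.3}
fused = dempster(m1, m2)
```

Normalizing out the conflict mass is what lets agreeing classifiers reinforce each other while ignorance (mass on THETA) keeps either one from dominating.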
  20. Beghtol, C.: The classification of fiction : the development of a system based on theoretical principles (1994) 0.01
    0.009331052 = product of:
      0.027993156 = sum of:
        0.027993156 = product of:
          0.05598631 = sum of:
            0.05598631 = weight(_text_:classification in 3413) [ClassicSimilarity], result of:
              0.05598631 = score(doc=3413,freq=4.0), product of:
                0.16072905 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.05046903 = queryNorm
                0.34832728 = fieldWeight in 3413, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The work is an adaptation of the author's dissertation and has the following chapters: (1) background and introduction; (2) a problem in classification theory; (3) previous fiction analysis theories and systems and 'The left hand of darkness'; (4) fiction warrant and critical warrant; (5) experimental fiction analysis system (EFAS); (6) application and evaluation of EFAS. Appendix 1 gives references to fiction analysis systems and appendix 2 lists EFAS coding sheets