Search (135 results, page 7 of 7)

  • Filter: theme_ss:"Inhaltsanalyse"
  1. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.00
    0.0021828816 = product of:
      0.017463053 = sum of:
        0.017463053 = weight(_text_:of in 354) [ClassicSimilarity], result of:
          0.017463053 = score(doc=354,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 354, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=354)
      0.125 = coord(1/8)
    
    Abstract
    This paper suggests a theoretical basis for identifying and classifying the kinds of subjects a picture may have, using previously developed principles of cataloging and classification, and concepts taken from the philosophy of art, from meaning in language, and from visual perception. The purpose of developing this theoretical basis is to provide the reader with a means for evaluating, adapting, and applying presently existing indexing languages, or for devising new languages for pictorial materials; this paper does not attempt to invent or prescribe a particular indexing language.
  2. Mai, J.-E.: Analysis in indexing : document and domain centered approaches (2005) 0.00
    0.0021828816 = product of:
      0.017463053 = sum of:
        0.017463053 = weight(_text_:of in 1024) [ClassicSimilarity], result of:
          0.017463053 = score(doc=1024,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 1024, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1024)
      0.125 = coord(1/8)
    
    Abstract
    The paper discusses the notion of steps in indexing, reveals that the document-centered approach to indexing is prevalent, and argues that this approach is problematic because it blocks out context-dependent factors in the indexing process. A domain-centered approach to indexing is presented as an alternative; the paper discusses how this approach includes a broader range of analyses and how it requires a new set of actions: analysis of the domain, of the users, and of the indexers. The paper concludes that the two-step procedure to indexing is insufficient to explain the indexing process and suggests that the domain-centered approach offers a guide that can help indexers manage the complexity of indexing.
  3. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.00
    0.0021828816 = product of:
      0.017463053 = sum of:
        0.017463053 = weight(_text_:of in 2021) [ClassicSimilarity], result of:
          0.017463053 = score(doc=2021,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 2021, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2021)
      0.125 = coord(1/8)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
  4. Holley, R.M.; Joudrey, D.N.: Aboutness and conceptual analysis : a review (2021) 0.00
    0.0021828816 = product of:
      0.017463053 = sum of:
        0.017463053 = weight(_text_:of in 703) [ClassicSimilarity], result of:
          0.017463053 = score(doc=703,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 703, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=703)
      0.125 = coord(1/8)
    
    Abstract
    The purpose of this paper is to provide an overview of aboutness and conceptual analysis, essential concepts for LIS practitioners to understand. Aboutness refers to the subject matter and genre/form properties of a resource. It is identified during conceptual analysis, which yields an aboutness statement, a summary of a resource's aboutness. While few scholars have discussed the aboutness determination process in detail, the methods described by Patrick Wilson, D.W. Langridge, Arlene G. Taylor, and Daniel N. Joudrey provide exemplary frameworks for determining aboutness and are presented here. Discussions of how to construct an aboutness statement and the challenges associated with aboutness determination follow.
  5. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.00
    0.0020918874 = product of:
      0.0167351 = sum of:
        0.0167351 = weight(_text_:of in 4972) [ClassicSimilarity], result of:
          0.0167351 = score(doc=4972,freq=18.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.25915858 = fieldWeight in 4972, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.125 = coord(1/8)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.1, S.163-173
  6. Hjoerland, B.: Subject (of documents) (2016) 0.00
    0.0020918874 = product of:
      0.0167351 = sum of:
        0.0167351 = weight(_text_:of in 3182) [ClassicSimilarity], result of:
          0.0167351 = score(doc=3182,freq=18.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.25915858 = fieldWeight in 3182, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3182)
      0.125 = coord(1/8)
    
    Abstract
    This article presents and discusses the concept "subject" or subject matter (of documents) as it has been examined in library and information science (LIS) for more than 100 years. Different theoretical positions are outlined, and it is found that the most important distinction is between document-oriented and request-oriented views. The document-oriented view conceives the subject as something inherent in documents, whereas the request-oriented (or policy-based) view understands the subject as an attribution made to documents in order to facilitate certain uses of them. Related concepts such as concepts, aboutness, topic, isness and ofness are also briefly presented. The conclusion is that the most fruitful way of defining the "subject" (of a document) is as the document's informative or epistemological potentials, that is, the document's potential to inform users and to advance the development of knowledge.
    Content
    Contents: 1. Introduction; 2. Theoretical views: 2.1 Charles Ammi Cutter (1837-1903), 2.2 S. R. Ranganathan (1892-1972), 2.3 Patrick Wilson (1927-2003), 2.4 "Content oriented" versus "request oriented" views, 2.5 Issues of subjectivity and objectivity, 2.6 The subject knowledge view, 2.7 Other views and definitions; 3. Related concepts: 3.1 Words versus concepts versus subjects, 3.2 Aboutness, 3.3 Topic, 3.4 Isness, 3.5 Ofness, 3.6 Theme.
    Source
    ISKO Encyclopedia of Knowledge Organization, ed. by B. Hjoerland. [http://www.isko.org/cyclo/logical_division]
  7. Clavier, V.; Paganelli, C.: Including authorial stance in the indexing of scientific documents (2012) 0.00
    0.0020496228 = product of:
      0.016396983 = sum of:
        0.016396983 = weight(_text_:of in 320) [ClassicSimilarity], result of:
          0.016396983 = score(doc=320,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.25392252 = fieldWeight in 320, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=320)
      0.125 = coord(1/8)
    
    Abstract
    This article argues that authorial stance should be taken into account in the indexing of scientific documents. Authorial stance has been widely studied in linguistics and is a typical feature of scientific writing that reveals the uniqueness of each author's perspective, their scientific contribution, and their thinking. We argue that authorial stance guides the reading of scientific documents and that it can be used to characterize the knowledge contained in such documents. Our research has previously shown that people reading dissertations are interested both in a topic and in a document's authorial stance. Now, we would like to propose a two-tiered indexing system. Dissertations would first be divided into paragraphs; then, each information unit would be defined by topic and by the markers of authorial stance present in the document.
  8. Naves, M.M.L.: Analise de assunto : concepcoes (1996) 0.00
    0.0019722506 = product of:
      0.015778005 = sum of:
        0.015778005 = weight(_text_:of in 607) [ClassicSimilarity], result of:
          0.015778005 = score(doc=607,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.24433708 = fieldWeight in 607, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=607)
      0.125 = coord(1/8)
    
    Abstract
    Discusses subject analysis as an important stage in the indexing process and notes the confusion that can arise over the meaning of the term. Considers questions and difficulties concerning subject analysis and the concept of aboutness.
  9. Renouf, A.: Making sense of text : automated approaches to meaning extraction (1993) 0.00
    0.0019524286 = product of:
      0.015619429 = sum of:
        0.015619429 = weight(_text_:of in 7111) [ClassicSimilarity], result of:
          0.015619429 = score(doc=7111,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.24188137 = fieldWeight in 7111, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=7111)
      0.125 = coord(1/8)
    
  10. Marshall, L.: Specific and generic subject headings : increasing subject access to library materials (2003) 0.00
    0.0019524286 = product of:
      0.015619429 = sum of:
        0.015619429 = weight(_text_:of in 5497) [ClassicSimilarity], result of:
          0.015619429 = score(doc=5497,freq=8.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.24188137 = fieldWeight in 5497, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5497)
      0.125 = coord(1/8)
    
    Abstract
    The principle of specificity for subject headings provides a clear advantage to many researchers for the precision it brings to subject searching. However, for some researchers very specific subject headings hinder an efficient and comprehensive search. An appropriate broader heading, especially when made narrower in scope by the addition of subheadings, can benefit researchers by providing generic access to their topic. Assigning both specific and generic subject headings to a work would enhance the subject accessibility for the diverse approaches and research needs of different catalog users. However, it can be difficult for catalogers to assign broader terms consistently to different works and without consistency the gathering function of those terms may not be realized.
  11. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.00
    0.0017640346 = product of:
      0.014112277 = sum of:
        0.014112277 = weight(_text_:of in 2671) [ClassicSimilarity], result of:
          0.014112277 = score(doc=2671,freq=20.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.21854173 = fieldWeight in 2671, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
      0.125 = coord(1/8)
    
    Abstract
    In recent years, there has been rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems, mainly because personalized search and recommendations can be facilitated by measuring the relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about resources through tags. Therefore, it is necessary to take sentiment relevance into account in such measurements. In this paper, we present SenticRank, a novel generic framework that incorporates various kinds of sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized rankings. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines; the experimental results verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find that SenticNet is the most effective knowledge base for boosting the performance of personalized search in folksonomy.
  12. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen (1996) 0.00
    0.0016735102 = product of:
      0.013388081 = sum of:
        0.013388081 = weight(_text_:of in 7292) [ClassicSimilarity], result of:
          0.013388081 = score(doc=7292,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.20732689 = fieldWeight in 7292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=7292)
      0.125 = coord(1/8)
    
    Source
    Natural language processing and speech technology: Results of the 3rd KONVENS Conference, Bielefeld, October 1996. Ed.: D. Gibbon
  13. Kessel, K.: Who's afraid of the big, bad uktena monster? : subject cataloging for images (2016) 0.00
    0.0015778005 = product of:
      0.012622404 = sum of:
        0.012622404 = weight(_text_:of in 3003) [ClassicSimilarity], result of:
          0.012622404 = score(doc=3003,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.19546966 = fieldWeight in 3003, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3003)
      0.125 = coord(1/8)
    
    Abstract
    This article describes the difference between cataloging images and cataloging books, the obstacles to including subject data in image cataloging records and how these obstacles can be overcome to make image collections more accessible. I call for participants to help create a subject authority reference resource for non-Western art. This article is an expanded and revised version of a presentation for the 2016 Joint ARLIS/VRA conference in Seattle.
  14. Vieira, L.: Modèle d'analyse pour une classification du document iconographique (1999) 0.00
    0.0013945919 = product of:
      0.011156735 = sum of:
        0.011156735 = weight(_text_:of in 6320) [ClassicSimilarity], result of:
          0.011156735 = score(doc=6320,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17277241 = fieldWeight in 6320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=6320)
      0.125 = coord(1/8)
    
    Footnote
    Translated title: Analysis model for a classification of iconographic documents
  15. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.00
    0.0013805755 = product of:
      0.011044604 = sum of:
        0.011044604 = weight(_text_:of in 7296) [ClassicSimilarity], result of:
          0.011044604 = score(doc=7296,freq=4.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.17103596 = fieldWeight in 7296, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
      0.125 = coord(1/8)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system that demonstrated significant increases in recall in experiments.
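The score breakdowns in the results above are Lucene ClassicSimilarity "explain" trees: each hit's score is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), scaled by a coord factor. A minimal sketch that recomputes result 1's numbers from the listing (queryNorm and coord are taken directly from the explain output; the function name is ours):

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Recompute a Lucene ClassicSimilarity score as shown in the explain trees."""
    tf = math.sqrt(freq)                              # 3.1622777 for freq=10
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 1.5637573 in the listing
    query_weight = idf * query_norm                   # 0.06457475 in the listing
    field_weight = tf * idf * field_norm              # 0.2704316 in the listing
    return query_weight * field_weight * coord

# Values for result 1 (doc=354): freq=10, fieldNorm=0.0546875, coord=1/8
score = classic_similarity_score(
    freq=10.0, doc_freq=25162, max_docs=44218,
    field_norm=0.0546875, query_norm=0.041294612, coord=0.125)
print(round(score, 10))
```

This reproduces 0.0021828816 to within floating-point rounding, confirming how the identical idf/queryWeight terms make freq and fieldNorm the only quantities that separate these hits.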
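Result 5 evaluates SentiStrength, which reports a dual score per text: positive strength on a 1 to 5 scale and negative strength on a -1 to -5 scale, driven by direct lexical indicators of sentiment. A much-simplified sketch of that idea follows; the tiny lexicon, the booster handling, and the example sentence are invented for illustration and are not SentiStrength's actual resources or algorithm:

```python
# Toy dual-scale sentiment strength scorer in the spirit of SentiStrength:
# report the strongest positive and strongest negative term in a text.
LEXICON = {"love": 3, "great": 2, "nice": 1, "hate": -4, "awful": -3, "bad": -2}
BOOSTERS = {"very": 1, "really": 1}  # intensify the following sentiment word

def sentiment_strength(text):
    pos, neg = 1, -1  # neutral baseline on SentiStrength's dual scale
    boost = 0
    for word in text.lower().split():
        if word in BOOSTERS:
            boost = BOOSTERS[word]
            continue
        strength = LEXICON.get(word)
        if strength is not None:
            if strength > 0:
                pos = max(pos, min(5, strength + boost))
            else:
                neg = min(neg, max(-5, strength - boost))
        boost = 0
    return pos, neg

print(sentiment_strength("I love this but the ending was very bad"))  # (3, -3)
```

Because the lexicon encodes direct sentiment rather than topic vocabulary, a scorer like this avoids the genre/topic confounds the abstract describes for indirect-indicator algorithms.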
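Result 11 builds on tag-based user and resource profiles: vectors of tag weights whose relevance is measured before any sentiment extension is applied. A minimal sketch of that baseline matching step, assuming cosine similarity as the relevance measure (the profiles and tags are made up for illustration, and SenticRank's sentiment re-ranking is not reproduced here):

```python
import math

def cosine(profile_a, profile_b):
    """Cosine relevance between two tag-weight profiles (dicts: tag -> weight)."""
    dot = sum(w * profile_b.get(tag, 0.0) for tag, w in profile_a.items())
    norm_a = math.sqrt(sum(w * w for w in profile_a.values()))
    norm_b = math.sqrt(sum(w * w for w in profile_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

user = {"jazz": 2.0, "vinyl": 1.0}  # hypothetical tag-based user profile
resources = {
    "album-review": {"jazz": 1.0, "blues": 1.0},
    "camera-test": {"photography": 3.0},
}
ranked = sorted(resources, key=lambda r: cosine(user, resources[r]), reverse=True)
print(ranked)  # ['album-review', 'camera-test']
```

The paper's point is that such purely lexical overlap ignores whether tags like "awful" or "brilliant" carry sentiment, which is what the SenticRank framework adds on top of this matching.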

Languages

  • e 126
  • d 8
  • f 1

Types

  • a 125
  • m 5
  • x 3
  • el 2
  • d 1
  • s 1
