Search (96 results, page 1 of 5)

  • × language_ss:"e"
  • × theme_ss:"Inhaltsanalyse"
  • × type_ss:"a"
  1. Pozzi de Sousa, B.; Ortega, C.D.: Aspects regarding the notion of subject in the context of different theoretical trends : teaching approaches in Brazil (2018) 0.02
    0.02384643 = product of:
      0.07153929 = sum of:
        0.01596415 = weight(_text_:in in 4707) [ClassicSimilarity], result of:
          0.01596415 = score(doc=4707,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.26884392 = fieldWeight in 4707, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4707)
        0.05557514 = product of:
          0.11115028 = sum of:
            0.11115028 = weight(_text_:ausbildung in 4707) [ClassicSimilarity], result of:
              0.11115028 = score(doc=4707,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.47439498 = fieldWeight in 4707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4707)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Series
    Advances in knowledge organization; vol.16
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro and M.E. Cerveira
    Theme
    Ausbildung
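  Note on the score breakdowns: each result's score is a standard Lucene ClassicSimilarity (TF-IDF) explanation. For every matching query term, queryWeight (idf x queryNorm) is multiplied by fieldWeight (sqrt(termFreq) x idf x fieldNorm), the per-term products are summed, and the total is scaled by a coordination factor for the fraction of query clauses that matched. As a minimal sketch (Python; queryNorm is simply copied from the explanation above, since it depends on the full query), the first result's score can be reproduced from the numbers displayed:

  import math

  # Reproduces the explain output for result 1 (doc 4707) under Lucene
  # ClassicSimilarity: score = coord * sum of per-term queryWeight * fieldWeight,
  # with tf = sqrt(termFreq).

  QUERY_NORM = 0.043654136  # taken from the explanation above

  def term_score(freq, idf, field_norm, clause_coord=1.0):
      tf = math.sqrt(freq)                  # 3.1622777 for freq=10
      query_weight = idf * QUERY_NORM       # idf * queryNorm
      field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
      return clause_coord * query_weight * field_weight

  score_in = term_score(freq=10.0, idf=1.3602545, field_norm=0.0625)
  # the "ausbildung" clause is additionally wrapped in coord(1/2)
  score_ausbildung = term_score(freq=2.0, idf=5.3671665,
                                field_norm=0.0625, clause_coord=0.5)

  total = (score_in + score_ausbildung) * (2.0 / 6.0)  # coord(2/6)
  print(f"{total:.8f}")  # approx. 0.0238464, the 0.02 displayed for result 1

  The same computation, with different term frequencies, idf values, and fieldNorms, accounts for every score breakdown in the remaining results.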
  2. Nohr, H.: ¬The training of librarians in content analysis : some thoughts on future necessities (1991) 0.02
    0.023284636 = product of:
      0.06985391 = sum of:
        0.014278769 = weight(_text_:in in 5149) [ClassicSimilarity], result of:
          0.014278769 = score(doc=5149,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 5149, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5149)
        0.05557514 = product of:
          0.11115028 = sum of:
            0.11115028 = weight(_text_:ausbildung in 5149) [ClassicSimilarity], result of:
              0.11115028 = score(doc=5149,freq=2.0), product of:
                0.23429902 = queryWeight, product of:
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.043654136 = queryNorm
                0.47439498 = fieldWeight in 5149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3671665 = idf(docFreq=560, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5149)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The training of librarians in content analysis is shaped both by the realities of the various application fields and by technological innovations. The present contribution attempts to identify the components of such training that are necessary for future-oriented instruction, and it stresses the importance of providing a sound theoretical basis, especially in the light of technological developments. The purpose of the training is to provide the foundation for 'action competence' on the part of the students
    Theme
    Ausbildung
  3. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    0.011251582 = product of:
      0.033754744 = sum of:
        0.010096614 = weight(_text_:in in 5830) [ClassicSimilarity], result of:
          0.010096614 = score(doc=5830,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 5830, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5830)
        0.02365813 = product of:
          0.04731626 = sum of:
            0.04731626 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
              0.04731626 = score(doc=5830,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.30952093 = fieldWeight in 5830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5830)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested
    Date
    5. 8.2006 13:22:08
  4. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.01
    0.010286495 = product of:
      0.030859483 = sum of:
        0.013115887 = weight(_text_:in in 5589) [ClassicSimilarity], result of:
          0.013115887 = score(doc=5589,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 5589, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.035487194 = score(doc=5589,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Content analysis is a highly flexible research method that has been widely used in library and information science (LIS) studies with varying research goals and objectives. The research method is applied in qualitative, quantitative, and sometimes mixed modes of research frameworks and employs a wide range of analytical techniques to generate findings and put them into context. This article characterizes content analysis as a systematic, rigorous approach to analyzing documents obtained or generated in the course of research. It briefly describes the steps involved in content analysis, differentiates between quantitative and qualitative content analysis, and shows that content analysis serves the purposes of both quantitative research and qualitative research. The authors draw on selected LIS studies that have used content analysis to illustrate the concepts addressed in the article. The article also serves as a gateway to methodological books and articles that provide more detail about aspects of content analysis discussed only briefly in the article.
    Source
    Library trends. 55(2006) no.1, S.22-45
  5. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.00990557 = product of:
      0.02971671 = sum of:
        0.011973113 = weight(_text_:in in 6525) [ClassicSimilarity], result of:
          0.011973113 = score(doc=6525,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 6525, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.035487194 = score(doc=6525,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Examines the goals of bibliographic control and subject analysis, and their relationship, for audiovisual materials in general and multipart videotape recordings in particular. Concludes that intellectual access to multipart works is not adequately provided for when these materials are catalogued in collective set records. An alternative is to catalogue the parts separately. This method increases intellectual access by providing more detailed descriptive notes and subject analysis. As evidenced by the large number of records in the national database for parts of multipart videos, cataloguers have made the intellectual content of multipart videos more accessible by cataloguing the parts separately rather than collectively. This reverses the traditional cataloguing process so that it begins with subject analysis, with the intellectual content of these materials driving the bibliographic description. Suggests ways of determining when multipart videos are best catalogued as sets or separately
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  6. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    0.009546547 = product of:
      0.02863964 = sum of:
        0.007728611 = weight(_text_:in in 4888) [ClassicSimilarity], result of:
          0.007728611 = score(doc=4888,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 4888, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4888)
        0.02091103 = product of:
          0.04182206 = sum of:
            0.04182206 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.04182206 = score(doc=4888,freq=4.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual but also visual, video, audio, or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval, according to which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which takes an approach to information management that handles the concrete textual, visual, audio, or video content of the documents directly, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy known as the semantic gap is that which occurs between semantic-interpretive access and content-based access. Finally, the integration of these conceptions is explained, gathering and combining the merits and advantages of each of the approaches and systems of access to information.
    Date
    22. 1.2012 13:02:10
    Footnote
    Refers to: Enser, P.G.B.: Visual image retrieval. In: Annual review of information science and technology. 42(2008), S.3-42.
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  7. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.01
    0.009484224 = product of:
      0.028452672 = sum of:
        0.010709076 = weight(_text_:in in 1416) [ClassicSimilarity], result of:
          0.010709076 = score(doc=1416,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 1416, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1416)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.035487194 = score(doc=1416,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper reports on the preliminary findings of a study that explores the mental associations made by novices viewing art images. In a controlled environment, 20 Taiwanese college students responded to the question "What does the painting remind you of?" after viewing each digitized image of 15 oil paintings by a famous Taiwanese artist. Rather than focusing on the representation or interpretation of art, the study attempted to solicit information about how non-experts are stimulated by art. This paper reports on the analysis of participant responses to three of the images and describes a 12-type taxonomy of associations that emerged from the analysis. While 9 of the types are derived and adapted from facets in the Art & Architecture Thesaurus, three new types - Artistic Influence Association, Reactive Association, and Prototype Association - are identified. The conclusion briefly discusses both the significance of the findings and the implications for future research.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  8. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.008308224 = product of:
      0.024924671 = sum of:
        0.010709076 = weight(_text_:in in 2203) [ClassicSimilarity], result of:
          0.010709076 = score(doc=2203,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 2203, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.014215595 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.014215595 = score(doc=2203,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.33333334 = coord(2/6)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops two spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The second algorithm, based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these two algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
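  The sketch below is a rough, generic illustration of the spreading-activation idea described in the abstract of result 8, not the authors' branch-and-bound or Hopfield-net implementation; the toy network, the decay factor, and all function names are assumptions made purely for the example (Python).

  # Illustrative sketch only: generic spreading activation over a weighted
  # concept network (e.g. merged thesauri). Activation flows outward from
  # seed terms, attenuated by link weight and a decay factor, and the most
  # strongly activated neighbouring concepts are returned.

  def spread_activation(graph, seeds, decay=0.5, iterations=3):
      """graph: dict mapping a term to a {neighbour: link_weight} dict.
      seeds: query terms to start the exploration from."""
      activation = {term: 1.0 for term in seeds}
      for _ in range(iterations):
          new_activation = dict(activation)
          for term, level in activation.items():
              for neighbour, weight in graph.get(term, {}).items():
                  # each node passes a decayed share of its activation on
                  new_activation[neighbour] = max(
                      new_activation.get(neighbour, 0.0), level * weight * decay)
          activation = new_activation
      # related concepts, strongest first, excluding the seeds themselves
      related = {t: a for t, a in activation.items() if t not in set(seeds)}
      return sorted(related.items(), key=lambda item: -item[1])

  # toy network of computing-related concepts
  net = {
      "information retrieval": {"thesaurus": 0.9, "indexing": 0.8},
      "thesaurus": {"controlled vocabulary": 0.9},
      "indexing": {"subject analysis": 0.7},
  }
  print(spread_activation(net, ["information retrieval"]))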
  9. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.00
    0.0049287775 = product of:
      0.029572664 = sum of:
        0.029572664 = product of:
          0.059145328 = sum of:
            0.059145328 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.059145328 = score(doc=5835,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    5. 8.2006 13:22:44
  10. Hutchins, W.J.: ¬The concept of 'aboutness' in subject indexing (1978) 0.00
    0.0027546515 = product of:
      0.016527908 = sum of:
        0.016527908 = weight(_text_:in in 1961) [ClassicSimilarity], result of:
          0.016527908 = score(doc=1961,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.27833787 = fieldWeight in 1961, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1961)
      0.16666667 = coord(1/6)
    
    Abstract
    The common view of the 'aboutness' of documents is that the index entries (or classifications) assigned to documents represent or indicate in some way the total contents of documents; indexing and classifying are seen as processes involving the 'summarization' of the texts of documents. In this paper an alternative concept of 'aboutness' is proposed, based on an analysis of the linguistic organization of texts, which is felt to be more appropriate in many indexing environments (particularly in non-specialized libraries and information services) and which has implications for the evaluation of the effectiveness of indexing systems
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.93-97.
  11. Nahotko, M.: Genre groups in knowledge organization (2016) 0.00
    0.0027546515 = product of:
      0.016527908 = sum of:
        0.016527908 = weight(_text_:in in 5139) [ClassicSimilarity], result of:
          0.016527908 = score(doc=5139,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.27833787 = fieldWeight in 5139, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5139)
      0.16666667 = coord(1/6)
    
    Abstract
    The article is an introduction to the development of Andersen's concept of textual tools used in knowledge organization (KO) in light of the theory of genres and activity systems. In particular, it builds on the concepts of genre connectivity and genre group, in addition to previously established concepts such as genre hierarchy, set, system, and repertoire. Five genre groups used in KO are described. An analysis of groups, systems, and selected genres used in KO is provided, based on the method proposed by Yates and Orlikowski. The aim is to show the genre system as a part of the activity system, and thus as a framework for KO.
  12. Buckland, M.; Shaw, R.: 4W vocabulary mapping across diverse reference genres (2008) 0.00
    0.0026772693 = product of:
      0.016063616 = sum of:
        0.016063616 = weight(_text_:in in 2258) [ClassicSimilarity], result of:
          0.016063616 = score(doc=2258,freq=18.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.27051896 = fieldWeight in 2258, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2258)
      0.16666667 = coord(1/6)
    
    Content
    This paper examines three themes in the design of search support services: linking different genres of reference resources (e.g. bibliographies, biographical dictionaries, catalogs, encyclopedias, place name gazetteers); the division of vocabularies by facet (e.g. What, Where, When, and Who); and mapping between both similar and dissimilar vocabularies. Different vocabularies within a facet can be used in conjunction, e.g. a place name combined with spatial coordinates for Where. In practice, vocabularies of different facets are used in combination in the representation or description of complex topics. Rich opportunities arise from mapping across vocabularies of dissimilar reference genres to recreate the amenities of a reference library. In a network environment, in which vocabulary control cannot be imposed, semantic correspondence across diverse vocabularies is a challenge and an opportunity.
    Series
    Advances in knowledge organization; vol.11
    Source
    Culture and identity in knowledge organization: Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada. Ed. by Clément Arsenault and Joseph T. Tennis
  13. Saif, H.; He, Y.; Fernandez, M.; Alani, H.: Contextual semantics for sentiment analysis of Twitter (2016) 0.00
    0.0025762038 = product of:
      0.015457222 = sum of:
        0.015457222 = weight(_text_:in in 2667) [ClassicSimilarity], result of:
          0.015457222 = score(doc=2667,freq=24.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.260307 = fieldWeight in 2667, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2667)
      0.16666667 = coord(1/6)
    
    Abstract
    Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both the commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Different from typical lexicon-based approaches, which offer fixed and static prior sentiment polarities of words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both entity level and tweet level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detection. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy on two datasets, but falls marginally behind by 1% in F-measure on the third dataset.
    Footnote
    Contribution in a special issue "Emotion and sentiment in social and expressive media"
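  As a loose illustration of the general idea described in the abstract of result 13 - adjusting a word's static lexicon polarity using the words it co-occurs with in tweets - the following sketch is offered. It is not the SentiCircles method itself; the toy lexicon, the blending parameter alpha, and all names are assumptions made for the example (Python).

  # Rough illustration only: nudge a word's prior lexicon polarity toward the
  # average prior polarity of the words it co-occurs with, so that context can
  # override a static prior.

  from collections import defaultdict

  prior = {"good": 0.8, "luck": 0.3, "ironic": -0.5}  # assumed toy lexicon

  def contextual_polarity(tweets, prior, alpha=0.5):
      cooc = defaultdict(list)
      for tweet in tweets:
          tokens = tweet.lower().split()
          for word in tokens:
              if word not in prior:
                  continue
              context = [prior[t] for t in tokens if t != word and t in prior]
              if context:
                  cooc[word].append(sum(context) / len(context))
      adjusted = dict(prior)
      for word, ctx_scores in cooc.items():
          ctx = sum(ctx_scores) / len(ctx_scores)
          # blend the static prior with the contextual evidence
          adjusted[word] = (1 - alpha) * prior[word] + alpha * ctx
      return adjusted

  print(contextual_polarity(["good luck with that ironic"], prior))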
  14. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.00
    0.0025503114 = product of:
      0.015301868 = sum of:
        0.015301868 = weight(_text_:in in 7296) [ClassicSimilarity], result of:
          0.015301868 = score(doc=7296,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2576908 = fieldWeight in 7296, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7296)
      0.16666667 = coord(1/6)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is to use natural-language captions to describe the data and to match the captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem to be required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments
  15. Allen, B.; Reser, D.: Content analysis in library and information science research (1990) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 7510) [ClassicSimilarity], result of:
          0.014278769 = score(doc=7510,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 7510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=7510)
      0.16666667 = coord(1/6)
    
  16. Mai, J.-E.: Deconstructing the indexing process (2000) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 4696) [ClassicSimilarity], result of:
          0.014278769 = score(doc=4696,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 4696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=4696)
      0.16666667 = coord(1/6)
    
    Source
    Advances in librarianship. 23(2000), S.269-298
  17. Moraes, J.B.E. de: Aboutness in fiction : methodological perspectives for knowledge organization (2012) 0.00
    0.0023797948 = product of:
      0.014278769 = sum of:
        0.014278769 = weight(_text_:in in 856) [ClassicSimilarity], result of:
          0.014278769 = score(doc=856,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.24046129 = fieldWeight in 856, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=856)
      0.16666667 = coord(1/6)
    
    Abstract
    The subject analysis of narrative texts of fiction is complex; the methodological model of identification of concepts as elaborated for scientific texts is not applicable to fiction. It is proposed here that theoretical and methodological use of the Generative Trajectory of Meaning postulated by Greimas may contribute to the identification of aboutness in narrative texts of fiction.
    Series
    Advances in knowledge organization; vol.13
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. and K.S. Raghavan
  18. Clavier, V.; Paganelli, C.: Including authorial stance in the indexing of scientific documents (2012) 0.00
    0.0023611297 = product of:
      0.014166778 = sum of:
        0.014166778 = weight(_text_:in in 320) [ClassicSimilarity], result of:
          0.014166778 = score(doc=320,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.23857531 = fieldWeight in 320, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=320)
      0.16666667 = coord(1/6)
    
    Abstract
    This article argues that authorial stance should be taken into account in the indexing of scientific documents. Authorial stance has been widely studied in linguistics and is a typical feature of scientific writing that reveals the uniqueness of each author's perspective, their scientific contribution, and their thinking. We argue that authorial stance guides the reading of scientific documents and that it can be used to characterize the knowledge contained in such documents. Our research has previously shown that people reading dissertations are interested both in a topic and in a document's authorial stance. Now, we would like to propose a two-tiered indexing system. Dissertations would first be divided into paragraphs; then, each information unit would be defined by topic and by the markers of authorial stance present in the document.
  19. Garcia Jiménez, A.; Valle Gastaminza, F. del: From thesauri to ontologies: a case study in a digital visual context (2004) 0.00
    0.0023517415 = product of:
      0.014110449 = sum of:
        0.014110449 = weight(_text_:in in 2657) [ClassicSimilarity], result of:
          0.014110449 = score(doc=2657,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2376267 = fieldWeight in 2657, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2657)
      0.16666667 = coord(1/6)
    
    Abstract
    In this paper a framework for the construction and organization of knowledge organization and representation languages in the context of digital photograph collections is presented. It analyses the requirements of photographs as documentary objects, as well as several models of indexing, different proposals for languages, and a theoretical revision of ontologies in this research field in relation to visual documents. In considering the photograph as an object of analysis, it is appropriate to study all its attributes: features, components, or properties of an object that can be represented in an information processing system. The attributes related to visual features include cognitive and affective responses and elements that describe spatial, semantic, symbolic, or emotional features of a photograph. In any case, it is necessary to treat: a) morphological and material attributes (emulsion, state of preservation); b) biographical attributes (school or trend, publication or exhibition); c) attributes of content: what and how a photograph says something; d) relational attributes: visual documents establish relationships with other documents that can be analysed in order to understand them.
    Series
    Advances in knowledge organization; vol.9
  20. Beghtol, C.: Stories : applications of narrative discourse analysis to issues in information storage and retrieval (1997) 0.00
    0.0023281053 = product of:
      0.013968632 = sum of:
        0.013968632 = weight(_text_:in in 5844) [ClassicSimilarity], result of:
          0.013968632 = score(doc=5844,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.23523843 = fieldWeight in 5844, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5844)
      0.16666667 = coord(1/6)
    
    Abstract
    The arts, humanities, and social sciences commonly borrow concepts and methods from the sciences, but interdisciplinary borrowing seldom occurs in the opposite direction. Research on narrative discourse is relevant to problems of documentary storage and retrieval, for the arts and humanities in particular, but also for other broad areas of knowledge. This paper views the potential application of narrative discourse analysis to information storage and retrieval problems from 2 perspectives: 1) analysis and comparison of narrative documents in all disciplines may be simplified if fundamental categories that occur in narrative documents can be isolated; and 2) the possibility of subdividing the world of knowledge initially into narrative and non-narrative documents is explored with particular attention to Werlich's work on text types