Search (23 results, page 1 of 2)

  • Filter: theme_ss:"Inhaltsanalyse" (content analysis)
  1. Raieli, R.: The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.06
    Score breakdown: 0.5 (coord) × [0.062250152 for "language" (tf=2.0 from freq 4, idf=3.9233, fieldNorm=0.0390625) + 0.04959398 for "22" (tf=2.0 from freq 4, idf=3.5018, fieldNorm=0.0390625)]
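    The score breakdowns shown for each entry follow Lucene's ClassicSimilarity (TF-IDF) explain output, compacted here to one line per document. The printed quantities compose as

    \[
    \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\sum_{t \in q} \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\text{queryWeight}} \cdot \underbrace{\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\text{fieldWeight}}
    \]

    with queryNorm = 0.051766515 throughout this result list. Worked check for the "language" term of this entry: queryWeight = 3.9232929 × 0.051766515 = 0.2030952, fieldWeight = √4.0 × 3.9232929 × 0.0390625 = 0.30650726, so the term contributes 0.2030952 × 0.30650726 = 0.062250152; adding the "22" term and applying coord gives 0.5 × (0.062250152 + 0.04959398) = 0.05592207, the score shown above.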
    
    Abstract
    This paper centres on the tools for the management of new digital documents, which are not only textual but also visual, audio, or multimedia in the full sense. Among its aims is to demonstrate that operating within the terms of generic Information Retrieval through textual language alone is limiting, and that it is instead necessary to consider broader criteria, such as those of MultiMedia Information Retrieval (MMIR), according to which every type of digital document can be analyzed and searched through the elements of language proper to its nature. MMIR is presented as the organic complex of the systems of Text Retrieval, Visual Retrieval, Video Retrieval, and Audio Retrieval, each of which takes an approach to information management that directly handles the concrete textual, visual, audio, or video content of the documents, here defined as content-based. In conclusion, the limits of this content-based objective access to documents are underlined. The discrepancy between semantic-interpretive access and content-based access is known as the semantic gap. Finally, the integration of these conceptions is explained, gathering and composing the merits and advantages of each of the approaches and systems for access to information.
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  2. Taylor, S.L.: Integrating natural language understanding with document structure analysis (1994) 0.03
    Score breakdown: 0.25 (coord) × 0.10673678 for "language" (tf=2.4495 from freq 6, idf=3.9233, fieldNorm=0.0546875)
    
    Abstract
    Document understanding, the interpretation of a document from its image form, is a technology area which benefits greatly from the integration of natural language processing with image processing. Develops a prototype of an Intelligent Document Understanding System (IDUS) which employs several technologies in a cooperative fashion: image processing, optical character recognition, document structure analysis and text understanding. Discusses the areas of research during the development of IDUS where the integration of natural language processing and image processing proved most beneficial: document structure analysis, OCR correction, and text analysis. Discusses two applications which are supported by IDUS: text retrieval and automatic generation of hypertext links
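    The cooperation described above can be pictured with a minimal, hypothetical sketch (plain Python; the lexicon and all names are invented for illustration and are not taken from IDUS): knowledge from the language side corrects OCR output before text analysis runs.

        # Hypothetical illustration, not IDUS code: a tiny lexicon of known
        # OCR misreadings is used to correct recognized tokens.
        LEXICON = {"lnformation": "information", "rnodel": "model"}

        def ocr_correct(tokens):
            """Replace tokens that match known OCR misreadings."""
            return [LEXICON.get(t, t) for t in tokens]

        raw = "lnformation rnodel retrieval".split()  # simulated OCR output
        print(ocr_correct(raw))  # ['information', 'model', 'retrieval']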
  3. Hildebrandt, B.; Moratz, R.; Rickheit, G.; Sagerer, G.: Kognitive Modellierung von Sprach- und Bildverstehen [Cognitive modelling of language and image understanding] (1996) 0.03
    Score breakdown: 0.25 (coord) × 0.105642 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.09375)
    
    Source
    Natural language processing and speech technology: Results of the 3rd KONVENS Conference, Bielefeld, October 1996. Ed.: D. Gibbon
  4. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.02
    Score breakdown: 0.25 (coord) × 0.087150216 for "language" (tf=2.0 from freq 4, idf=3.9233, fieldNorm=0.0546875)
    
    Abstract
    Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments
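    A toy sketch of the caption-matching idea (hypothetical Python; the captions, stop list and subset test are invented for illustration and are not Rowe's depiction-inference rules):

        # Match an English query against natural-language captions instead of
        # analyzing the pictures themselves.
        CAPTIONS = {
            "img1": "A damaged pier after the storm",
            "img2": "Two aircraft on the runway at sunset",
        }
        STOPWORDS = {"a", "an", "the", "on", "at", "after", "two"}

        def content_words(text):
            # lowercase content words only
            return {w.lower().strip(".,") for w in text.split()} - STOPWORDS

        def match(query):
            # a picture matches when every query term occurs in its caption
            q = content_words(query)
            return [name for name, cap in CAPTIONS.items() if q <= content_words(cap)]

        print(match("aircraft runway"))  # -> ['img2']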
  5. Shatford, S.: Analyzing the subject of a picture : a theoretical approach (1986) 0.02
    Score breakdown: 0.25 (coord) × 0.087150216 for "language" (tf=2.0 from freq 4, idf=3.9233, fieldNorm=0.0546875)
    
    Abstract
    This paper suggests a theoretical basis for identifying and classifying the kinds of subjects a picture may have, using previously developed principles of cataloging and classification, and concepts taken from the philosophy of art, from meaning in language, and from visual perception. The purpose of developing this theoretical basis is to provide the reader with a means for evaluating, adapting, and applying presently existing indexing languages, or for devising new languages for pictorial materials; this paper does not attempt to invent or prescribe a particular indexing language.
  6. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    Score breakdown: 0.25 (coord) × 0.07013648 for "22" (tf=1.4142 from freq 2, idf=3.5018, fieldNorm=0.078125)
    
    Date
    5. 8.2006 13:22:44
  7. Bade, D.: The creation and persistence of misinformation in shared library catalogs : language and subject knowledge in a technological era (2002) 0.02
    Score breakdown: 0.5 (coord) × [0.017607002 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.015625) + 0.014027297 for "22" (tf=1.4142 from freq 2, idf=3.5018, fieldNorm=0.015625)]
    
    Date
    22. 9.1997 19:16:05
  8. Andersen, J.; Christensen, F.S.: Wittgenstein and indexing theory (2001) 0.02
    Score breakdown: 0.25 (coord) × 0.062250152 for "language" (tf=2.0 from freq 4, idf=3.9233, fieldNorm=0.0390625)
    
    Abstract
    The paper considers indexing an activity that deals with linguistic entities. It rests on the assumption that a theory of indexing should be based on a philosophy of language, because indexing is concerned with the linguistic representation of meaning. The paper consists of four sections: it begins with some basic considerations on the nature of indexing and the requirements for a theory of it; this is followed by a short review of the use of Wittgenstein's philosophy in LIS literature; next is an analysis of Wittgenstein's work Philosophical Investigations; finally, we deduce a theory of indexing from this philosophy. Considering an indexing theory a theory of meaning entails that, for the purpose of retrieval, indexing is a representation of meaning. Therefore, an indexing theory is concerned with how words are used in the linguistic context. Furthermore, the indexing process is a communicative process containing an interpretative element. Through the philosophy of the later Wittgenstein, it is shown that language and meaning are publicly constituted entities. Since they form the basis of indexing, a theory hereof must take into account that no single actor can define the meaning of documents. Rather, this is decided by the social, historical and linguistic context in which the document is produced, distributed and exchanged. Indexing must clarify and reflect these contexts.
  9. Zarri, G.P.: Indexing and querying of narrative documents, a knowledge representation approach (2003) 0.02
    Score breakdown: 0.25 (coord) × 0.0616245 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0546875)
    
    Abstract
    We describe here NKRL (Narrative Knowledge Representation Language), a semantic formalism for taking into account the characteristics of narrative multimedia documents. In these documents, the information content consists of the description of 'events' that relate the real or intended behaviour of some 'actors' (characters, personages, etc.). Narrative documents of economic interest include news stories, corporate documents, normative and legal texts, intelligence messages, representations of patients' medical records, etc. NKRL is characterised by the use of several knowledge representation principles and several high-level inference tools.
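    To make the event-centred idea concrete, here is an invented illustration (plain Python, deliberately not NKRL syntax; the predicate and role names are made up):

        # A narrative 'event' with explicit roles, so queries can address the
        # roles directly instead of searching free text.
        event = {
            "predicate": "MOVE",
            "agent": "Company_X",        # the 'actor' of the event
            "object": "headquarters",
            "destination": "Lyon",
            "date": "1999-06",
        }
        print(event["agent"], "->", event["destination"])  # who moved where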
  10. Langridge, D.W.: Subject analysis : principles and procedures (1989) 0.02
    Score breakdown: 0.25 (coord) × 0.0616245 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0546875)
    
    Abstract
    Subject analysis is the basis of all classifying and indexing techniques and is equally applicable to automatic and manual indexing systems. This book discusses subject analysis as an activity in its own right, independent of any indexing language. It examines the theoretical basis of subject analysis using the concepts of forms of knowledge as applicable to classification schemes.
  11. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.01
    Score breakdown: 0.25 (coord) × 0.056109186 for "22" (tf=1.4142 from freq 2, idf=3.5018, fieldNorm=0.0625)
    
    Date
    5. 8.2006 13:22:08
  12. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel. KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz [Automatic transcription of videos : Television 3.0: automated sentiment analysis and compilation of short videos with a high excitement level. AI-generated metadata: from technology watch to productive use] (2021) 0.01
    Score breakdown: 0.25 (coord) × 0.056109186 for "22" (tf=1.4142 from freq 2, idf=3.5018, fieldNorm=0.0625)
    
    Date
    22. 5.2021 12:43:05
  13. Amac, T.: Linguistic context analysis : a new approach to communication evaluation (1997) 0.01
    Score breakdown: 0.25 (coord) × 0.052821 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.046875)
    
    Abstract
    Argues that the integration of computational psycholinguistics can improve corporate communication and thus become a new strategic tool. An electronic dictionary of basic, neutral and negative connotations was created for nouns, verbs and adjectives appearing in press releases and other communication media; it can be updated with client-specific words. The focus on negative messages has the objective of detecting who is criticized, why and how, learning from the vocabulary of opinion leaders, and improving issues management proactively. Suggests a new form of analysis called 'computational linguistic context analysis' (CLCA), which analyzes nominal groups of negative words rather than monitoring content in the traditional way. Concludes that CLCA can be used to analyze large quantities of press cuttings about a company and could, theoretically, be used to analyze the structure, language and style of a particular journalist to whom it is planned to send a press release or article
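    The lexicon-driven part of this approach can be sketched as follows (hypothetical Python; the dictionary entries are invented, and CLCA as described works on nominal groups rather than single words):

        # Flag words carrying a negative connotation in the dictionary.
        CONNOTATIONS = {
            "failure": "negative", "loss": "negative", "recall": "negative",
            "growth": "neutral", "product": "basic",
        }

        def negative_words(sentence):
            tokens = [w.lower().strip(".,;") for w in sentence.split()]
            return [w for w in tokens if CONNOTATIONS.get(w) == "negative"]

        print(negative_words("The product recall caused a heavy loss."))
        # -> ['recall', 'loss']: starting points for asking who/why/how is criticized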
  14. Weinberg, B.H.: Why indexing fails the researcher (1988) 0.01
    Score breakdown: 0.25 (coord) × 0.0440175 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0390625)
    
    Abstract
    It is a truism in information science that indexing is associated with 'aboutness', and that index terms that accurately represent what a document is about will serve the needs of the user/searcher well. It is contended in this paper that indexing which is limited to the representation of aboutness serves the novice in a discipline adequately, but does not serve the scholar or researcher, who is concerned with highly specific aspects of or points-of-view on a subject. The linguistic analogs of 'aboutness' and 'aspects' are 'topic' and 'comment' respectively. Serial indexing services deal with topics at varying levels of specificity, but neglect comment almost entirely. This may explain the underutilization of secondary information services by scholars, as has been repeatedly demonstrated in user studies. It may also account for the incomplete lists of bibliographic references in many research papers. Natural language searching of fulltext databases does not solve this problem, because the aspect of a topic of interest to researchers is often inexpressible in concrete terms. The thesis is illustrated with examples of indexing failures in research projects the author has conducted on a range of linguistic and library-information science topics. Finally, the question of whether indexing can be improved to meet the needs of researchers is examined
  15. Mai, J.-E.: Semiotics and indexing : an analysis of the subject indexing process (2001) 0.01
    Score breakdown: 0.25 (coord) × 0.0440175 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0390625)
    
    Abstract
    This paper explains at least some of the major problems related to the subject indexing process and proposes a new approach to understanding it. The process is ordinarily described as a sequence of steps: the subject is first determined, then described in a few sentences, and lastly the description is converted into the indexing language. It is argued that this typical account characteristically lacks an understanding of the central nature of the process. Indexing is not a neutral and objective representation of a document's subject matter but the representation of an interpretation of a document for future use. Semiotics is offered here as a framework for understanding the "interpretative" nature of the subject indexing process. By placing this process within Peirce's semiotic framework of ideas and terminology, a more detailed description is offered, showing that the uncertainty generally associated with the process arises because the indexer goes through a number of steps and creates the subject matter of the document during this process. The creation of the subject matter is based on the indexer's social and cultural context. The paper offers an explanation of what occurs in the indexing process and suggests that there is little certainty to its result.
  16. Sauperl, A.: Subject cataloging process of Slovenian and American catalogers (2005) 0.01
    Score breakdown: 0.25 (coord) × 0.0440175 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0390625)
    
    Abstract
    Purpose - An empirical study has shown that the real process of subject cataloging does not correspond entirely to theoretical descriptions in textbooks and international standards. The purpose of this paper is to address the issue of whether it is possible for catalogers who have not received formal training to perform subject cataloging in a different way from their trained colleagues. Design/methodology/approach - A qualitative study was conducted in 2001 among five Slovenian public library catalogers. The resulting model is compared to previous findings. Findings - First, all catalogers attempted to determine what the book was about. While the American catalogers tried to understand the topic and the author's intent, the Slovenian catalogers appeared to focus on the topic only. Slovenian and American academic library catalogers did not demonstrate any anticipation of possible uses that users might have of the book, while this was important for American public library catalogers. All catalogers used existing records to build new ones and/or to search for subject headings. The verification of subject representation with the indexing language was the last step in the subject cataloging process of American catalogers, often skipped by Slovenian catalogers. Research limitations/implications - The small convenience sample limits the findings. Practical implications - Comparison of the subject cataloging processes of Slovenian and American catalogers, two different groups, is important because they both contribute to OCLC's WorldCat database. If the cataloging community is building a universal catalog and approaches to subject description are different, then the resulting subject representations might also be different. Originality/value - This is one of the very few empirical studies of subject cataloging and indexing.
  17. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.01
    Score breakdown: 0.25 (coord) × 0.0440175 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0390625)
    
    Abstract
    In this paper, we focus on applying sentiment analysis to resources from online art collections, exploiting as an information source the tags that visitors leave as textual traces when commenting on artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from the Semantic and Social Web to Natural Language Processing, provide the building blocks for creating a semantic social space in which to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space through a graphical interactive interface. The development of such a semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded in W3C ontology languages. This gives us the twofold advantage of enabling tractable reasoning on detected emotions and related artworks, and of fostering the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
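    What such an encoding could look like is sketched below (Python with rdflib, assuming rdflib 6+; the namespace, class and property names are invented and are not the paper's ontology):

        # Encode that a tagged artwork evokes a Plutchik emotion as RDF triples.
        from rdflib import Graph, Namespace, RDF

        EX = Namespace("http://example.org/emotions#")
        g = Graph()
        g.bind("ex", EX)
        g.add((EX.artwork42, RDF.type, EX.Artwork))
        g.add((EX.artwork42, EX.evokes, EX.Joy))  # Joy: one of Plutchik's basic emotions
        g.add((EX.Joy, RDF.type, EX.Emotion))
        print(g.serialize(format="turtle"))  # Turtle output, queryable with SPARQL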
  18. Hauser, E.; Tennis, J.T.: Episemantics: aboutness as aroundness (2019) 0.01
    Score breakdown: 0.25 (coord) × 0.0440175 for "language" (tf=1.4142 from freq 2, idf=3.9233, fieldNorm=0.0390625)
    
    Abstract
    Aboutness ranks amongst our field's greatest bugbears. What is a work about? How can this be known? This mirrors debates within the philosophy of language, where the concept of representation has similarly evaded satisfactory definition. This paper proposes that we abandon the strong sense of the word aboutness, which seems to promise some inherent relationship between work and subject, or, in philosophical terms, between word and world. Instead, we seek an etymological reset to the older sense of aboutness as "in the vicinity, nearby; in some place or various places nearby; all over a surface." To distinguish this sense in the context of information studies, we introduce the term episemantics. The authors have each independently applied this term in slightly different contexts and scales (Hauser 2018a; Tennis 2016), and this article presents a unified definition of the term and guidelines for applying it at the scale of both words and works. The resulting weak concept of aboutness is pragmatic, in Star's sense of a focus on consequences over antecedents, while reserving space for the critique and improvement of aboutness determinations within various contexts and research programs. The paper finishes with a discussion of the implication of the concept of episemantics and methodological possibilities it offers for knowledge organization research and practice. We draw inspiration from Melvil Dewey's use of physical aroundness in his first classification system and ask how aroundness might be more effectively operationalized in digital environments.
  19. Sigel, A.: How can user-oriented depth analysis be constructively guided? (2000) 0.01
    Score breakdown: 0.25 (coord) × 0.043575108 for "language" (tf=2.0 from freq 4, idf=3.9233, fieldNorm=0.02734375)
    
    Abstract
    It is vital for library and information science to understand the subject indexing process thoroughly. However, document analysis, the first and most important step in indexing, has not received sufficient attention. As this is an exceptionally hard problem, we still lack a sound indexing theory. Therefore we have difficulties in teaching indexing and in explaining why a given subject representation is "better" than another. Technological advancements have not helped to close this fundamental gap. To proceed, we should ask the right questions instead. Several types of indexer inconsistencies can be explained as acceptable yet different conceptualizations, which result from the variety of groups dealing with a problem from their respective viewpoints. Multiply indexed documents are regarded as the normal case. Intersubjectively replicable indexing results are often questionable or do not constitute interesting cases of indexing at all. In the context of my ongoing dissertation, in which I intend to develop an enhanced indexing theory by investigating improvements within a social sciences domain, this paper explains user-oriented selective depth analysis and why I chose that configuration. Strongly influenced by Mai's dissertation, I also communicate my first insights concerning current indexing theories. I agree that I cannot ignore epistemological stances and philosophical issues in language and meaning related to indexing, and accept the openness of the interpretive nature of the indexing process. Although I present arguments against the employment of an indexing language as well, it is still indispensable in situations which demand easier access and control by devices. Despite the enormous difficulties that user-oriented and selective depth analysis poses, I argue that it is both feasible and useful if one achieves careful guidance of the possible interpretations. There is some hope because the number of useful interpretations is limited: every summary is tailored to a purpose, audience and situation. Domain, discourse and social practice entail additional constraints. A pluralistic method mix that focuses on ecologically valid, holistic contexts and employs qualitative methods is recommended. Domain analysis urgently has to be made more practical and applicable. Only then will we be able to investigate domains empirically in order to identify their structures shaped by the corresponding discourse communities. We plan to represent the recognized problem structures and indexing questions of relevance to a small domain in formal, ontological computer models -- if we can find such stable knowledge structures. This would allow us to tailor summaries dynamically for user communities. For practical purposes we suggest assuming a less demanding position than Hjorland's "totality of the epistemological potential". It is sufficient that we identify and represent iteratively the information needs of today's user groups in interactive knowledge-based systems. The best way to formalize such knowledge gained about discourse communities is, however, unknown. Indexers should stay in direct contact with the community they serve, or be part of it, to ensure agreement with their viewpoints. Checklist/request-oriented indexing could be very helpful, but it remains to be demonstrated how well it will be applicable in the social sciences. A frame-based representation, or at least a sophisticated grouping of terms, could help to express relational knowledge structures. There remains much work to do, since in practice no one has yet shown how such an improved indexing system would work and whether the indexing results would really be "better".
  20. Weimer, K.H.: The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    Score breakdown: 0.25 (coord) × 0.04208189 for "22" (tf=1.4142 from freq 2, idf=3.5018, fieldNorm=0.046875)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18