Search (20 results, page 1 of 1)

  • theme_ss:"Inhaltsanalyse"
  • type_ss:"a"
  1. Beghtol, C.: Toward a theory of fiction analysis for information storage and retrieval (1992) 0.05
    0.05130119 = product of:
      0.10260238 = sum of:
        0.10260238 = sum of:
          0.04608324 = weight(_text_:data in 5830) [ClassicSimilarity], result of:
            0.04608324 = score(doc=5830,freq=2.0), product of:
              0.16488427 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.052144732 = queryNorm
              0.2794884 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
          0.056519132 = weight(_text_:22 in 5830) [ClassicSimilarity], result of:
            0.056519132 = score(doc=5830,freq=2.0), product of:
              0.18260197 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052144732 = queryNorm
              0.30952093 = fieldWeight in 5830, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5830)
      0.5 = coord(1/2)
    
    Abstract
This paper examines various issues that arise in establishing a theoretical basis for an experimental fiction analysis system. It analyzes the warrants of fiction and of works about fiction. From this analysis, it derives classificatory requirements for a fiction system. Classificatory techniques that may contribute to the specification of data elements in fiction are suggested.
    Date
    5. 8.2006 13:22:08
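The indented breakdowns under each hit are Lucene ClassicSimilarity "explain" trees. As a minimal sketch of the arithmetic (the queryNorm constant is copied from the output above rather than derived, since it depends on the full boolean query including the facet clauses), the score of the first hit can be reproduced as:

```python
import math

# queryNorm copied from the explain output above; it depends on the full
# boolean query, so it is not derived here.
QUERY_NORM = 0.052144732

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               field_norm: float) -> float:
    """score = queryWeight * fieldWeight, with tf = sqrt(freq)."""
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * QUERY_NORM
    field_weight = math.sqrt(freq) * term_idf * field_norm
    return query_weight * field_weight

# Hit 1 (doc 5830): terms "data" and "22", each with freq=2.0, fieldNorm=0.0625.
data_score = term_score(2.0, 5088, 44218, 0.0625)  # ~0.04608324
t22_score = term_score(2.0, 3622, 44218, 0.0625)   # ~0.056519132

# coord(1/2) halves the sum because one of two top-level clauses matched.
total = 0.5 * (data_score + t22_score)             # ~0.05130119
```

The same arithmetic reproduces every explain tree below; only freq, docFreq, and fieldNorm vary per hit.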
  2. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.05
    0.05130119 = product of:
      0.10260238 = sum of:
        0.10260238 = sum of:
          0.04608324 = weight(_text_:data in 251) [ClassicSimilarity], result of:
            0.04608324 = score(doc=251,freq=2.0), product of:
              0.16488427 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.052144732 = queryNorm
              0.2794884 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
          0.056519132 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
            0.056519132 = score(doc=251,freq=2.0), product of:
              0.18260197 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052144732 = queryNorm
              0.30952093 = fieldWeight in 251, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=251)
      0.5 = coord(1/2)
    
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.947 vom 14.07.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzMxOCwiNjczMmIwMzRlMDdmIiwwLDAsMjg4LDFd]
  3. Pejtersen, A.M.: Design of a classification scheme for fiction based on an analysis of actual user-librarian communication, and use of the scheme for control of librarians' search strategies (1980) 0.02
    0.017662229 = product of:
      0.035324458 = sum of:
        0.035324458 = product of:
          0.070648916 = sum of:
            0.070648916 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
              0.070648916 = score(doc=5835,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.38690117 = fieldWeight in 5835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5835)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 8.2006 13:22:44
  4. Rowe, N.C.: Inferring depictions in natural-language captions for efficient access to picture data (1994) 0.02
    0.017460302 = product of:
      0.034920603 = sum of:
        0.034920603 = product of:
          0.069841206 = sum of:
            0.069841206 = weight(_text_:data in 7296) [ClassicSimilarity], result of:
              0.069841206 = score(doc=7296,freq=6.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.42357713 = fieldWeight in 7296, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7296)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Multimedia data can require significant examination time to find desired features ('content analysis'). An alternative is using natural-language captions to describe the data, and matching captions to English queries. But it is hard to include everything in the caption of a complicated datum, so significant content analysis may still seem required. We discuss linguistic clues in captions, both syntactic and semantic, that can simplify or eliminate content analysis. We introduce the notion of content depiction and rules for depiction inference. Our approach is implemented in an expert system which demonstrated significant increases in recall in experiments.
  5. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.01
    0.014401014 = product of:
      0.028802028 = sum of:
        0.028802028 = product of:
          0.057604056 = sum of:
            0.057604056 = weight(_text_:data in 4236) [ClassicSimilarity], result of:
              0.057604056 = score(doc=4236,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.34936053 = fieldWeight in 4236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4236)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Presents an overview of some recent developments in the clause-based content analysis of linguistic data. Introduces network analysis of evaluative texts, the analysis of cognitive maps, and linguistic content analysis. Focuses on the types of substantive inferences afforded by the three approaches.
  6. Solomon, P.: Access to fiction for children : a user-based assessment of options and opportunities (1997) 0.01
    0.014256276 = product of:
      0.028512552 = sum of:
        0.028512552 = product of:
          0.057025105 = sum of:
            0.057025105 = weight(_text_:data in 5845) [ClassicSimilarity], result of:
              0.057025105 = score(doc=5845,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.34584928 = fieldWeight in 5845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5845)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Reports on a study of children's intentions, purposes, search terms, strategies, successes and breakdowns in accessing fiction. Data were gathered using naturalistic methods of persistent, intensive observation and questioning with children in several school library media centres in the USA, including 997 OPAC transactions. Analyzes the data and highlights aspects of the broader context of the system which may help in the development of mechanisms for electronic access.
  7. Raieli, R.: ¬The semantic hole : enthusiasm and caution around multimedia information retrieval (2012) 0.01
    0.012489081 = product of:
      0.024978163 = sum of:
        0.024978163 = product of:
          0.049956325 = sum of:
            0.049956325 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.049956325 = score(doc=4888,freq=4.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.27358043 = fieldWeight in 4888, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2012 13:02:10
    Source
    Knowledge organization. 39(2012) no.1, S.13-22
  8. Kessel, K.: Who's afraid of the big, bad uktena monster? : subject cataloging for images (2016) 0.01
    0.01152081 = product of:
      0.02304162 = sum of:
        0.02304162 = product of:
          0.04608324 = sum of:
            0.04608324 = weight(_text_:data in 3003) [ClassicSimilarity], result of:
              0.04608324 = score(doc=3003,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.2794884 = fieldWeight in 3003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3003)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes the difference between cataloging images and cataloging books, the obstacles to including subject data in image cataloging records and how these obstacles can be overcome to make image collections more accessible. I call for participants to help create a subject authority reference resource for non-Western art. This article is an expanded and revised version of a presentation for the 2016 Joint ARLIS/VRA conference in Seattle.
  9. Weimer, K.H.: ¬The nexus of subject analysis and bibliographic description : the case of multipart videos (1996) 0.01
    0.010597337 = product of:
      0.021194674 = sum of:
        0.021194674 = product of:
          0.04238935 = sum of:
            0.04238935 = weight(_text_:22 in 6525) [ClassicSimilarity], result of:
              0.04238935 = score(doc=6525,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.23214069 = fieldWeight in 6525, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, S.5-18
  10. Chen, S.-J.; Lee, H.-L.: Art images and mental associations : a preliminary exploration (2014) 0.01
    0.010597337 = product of:
      0.021194674 = sum of:
        0.021194674 = product of:
          0.04238935 = sum of:
            0.04238935 = weight(_text_:22 in 1416) [ClassicSimilarity], result of:
              0.04238935 = score(doc=1416,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.23214069 = fieldWeight in 1416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1416)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. White, M.D.; Marsh, E.E.: Content analysis : a flexible methodology (2006) 0.01
    0.010597337 = product of:
      0.021194674 = sum of:
        0.021194674 = product of:
          0.04238935 = sum of:
            0.04238935 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.04238935 = score(doc=5589,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library trends. 55(2006) no.1, S.22-45
  12. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.01
    0.010183054 = product of:
      0.020366108 = sum of:
        0.020366108 = product of:
          0.040732216 = sum of:
            0.040732216 = weight(_text_:data in 4972) [ClassicSimilarity], result of:
              0.040732216 = score(doc=4972,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.24703519 = fieldWeight in 4972, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4972)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
  13. Chen, H.: ¬An analysis of image queries in the field of art history (2001) 0.01
    0.010080709 = product of:
      0.020161418 = sum of:
        0.020161418 = product of:
          0.040322836 = sum of:
            0.040322836 = weight(_text_:data in 5187) [ClassicSimilarity], result of:
              0.040322836 = score(doc=5187,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.24455236 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Chen arranged with an art history instructor to require the use of 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and the Jorgensen schemes are suggested.
  14. Morehead, D.R.; Pejtersen, A.M.; Rouse, W.B.: ¬The value of information and computer-aided information seeking : problem formulation and application to fiction retrieval (1984) 0.01
    0.010080709 = product of:
      0.020161418 = sum of:
        0.020161418 = product of:
          0.040322836 = sum of:
            0.040322836 = weight(_text_:data in 5828) [ClassicSimilarity], result of:
              0.040322836 = score(doc=5828,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.24455236 = fieldWeight in 5828, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5828)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Issues concerning the formulation and application of a model of how humans value information are examined. Formulation of a value function is based on research from modelling, value assessment, human information seeking behavior, and human decision making. The proposed function is incorporated into a computer-based fiction retrieval system and evaluated using data from nine searches. Evaluation is based on the ability of an individual's value function to discriminate among novels selected, rejected, and not considered. The results are discussed in terms of both formulation and utilization of a value function, as well as the implications for extending the proposed formulation to other information seeking environments.
  15. Short, M.: Text mining and subject analysis for fiction; or, using machine learning and information extraction to assign subject headings to dime novels (2019) 0.01
    0.010080709 = product of:
      0.020161418 = sum of:
        0.020161418 = product of:
          0.040322836 = sum of:
            0.040322836 = weight(_text_:data in 5481) [ClassicSimilarity], result of:
              0.040322836 = score(doc=5481,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.24455236 = fieldWeight in 5481, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5481)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Theme
    Data Mining
  16. Rorissa, A.; Iyer, H.: Theories of cognition and image categorization : what category labels reveal about basic level theory (2008) 0.01
    0.008640608 = product of:
      0.017281216 = sum of:
        0.017281216 = product of:
          0.03456243 = sum of:
            0.03456243 = weight(_text_:data in 1958) [ClassicSimilarity], result of:
              0.03456243 = score(doc=1958,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.2096163 = fieldWeight in 1958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1958)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Information search and retrieval interactions usually involve information content in the form of document collections, information retrieval systems and interfaces, and the user. To fully understand information search and retrieval interactions between users' cognitive space and the information space, researchers need to turn to cognitive models and theories. In this article, the authors use one of these theories, the basic level theory. Use of the basic level theory to understand human categorization is both appropriate and essential to user-centered design of taxonomies, ontologies, browsing interfaces, and other indexing tools and systems. Analyses of data from two studies involving free sorting by 105 participants of 100 images were conducted. The types of categories formed and category labels were examined. Results of the analyses indicate that image category labels generally belong to levels superordinate to the basic level, and are generic and interpretive. Implications for research on theories of cognition and categorization, and design of image indexing, retrieval and browsing systems are discussed.
  17. Bi, Y.: Sentiment classification in social media data by combining triplet belief functions (2022) 0.01
    0.008640608 = product of:
      0.017281216 = sum of:
        0.017281216 = product of:
          0.03456243 = sum of:
            0.03456243 = weight(_text_:data in 613) [ClassicSimilarity], result of:
              0.03456243 = score(doc=613,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.2096163 = fieldWeight in 613, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=613)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Enser, P.G.B.; Sandom, C.J.; Hare, J.S.; Lewis, P.H.: Facing the reality of semantic image retrieval (2007) 0.01
    0.007200507 = product of:
      0.014401014 = sum of:
        0.014401014 = product of:
          0.028802028 = sum of:
            0.028802028 = weight(_text_:data in 837) [ClassicSimilarity], result of:
              0.028802028 = score(doc=837,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.17468026 = fieldWeight in 837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=837)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To provide a better-informed view of the extent of the semantic gap in image retrieval, and the limited potential for bridging it offered by current semantic image retrieval techniques. Design/methodology/approach - Within an ongoing project, a broad spectrum of operational image retrieval activity has been surveyed, and, from a number of collaborating institutions, a test collection assembled which comprises user requests, the images selected in response to those requests, and their associated metadata. This has provided the evidence base upon which to make informed observations on the efficacy of cutting-edge automatic annotation techniques which seek to integrate the text-based and content-based image retrieval paradigms. Findings - Evidence from the real-world practice of image retrieval highlights the existence of a generic-specific continuum of object identification, and the incidence of temporal, spatial, significance and abstract concept facets, manifest in textual indexing and real-query scenarios but often having no directly visible presence in an image. These factors combine to limit the functionality of current semantic image retrieval techniques, which interpret only visible features at the generic extremity of the generic-specific continuum. Research limitations/implications - The project is concerned with the traditional image retrieval environment in which retrieval transactions are conducted on still images which form part of managed collections. The possibilities offered by ontological support for adding functionality to automatic annotation techniques are considered. Originality/value - The paper offers fresh insights into the challenge of migrating content-based image retrieval from the laboratory to the operational environment, informed by newly-assembled, comprehensive, live data.
  19. Bertola, F.; Patti, V.: Ontology-based affective models to organize artworks in the social semantic web (2016) 0.01
    0.007200507 = product of:
      0.014401014 = sum of:
        0.014401014 = product of:
          0.028802028 = sum of:
            0.028802028 = weight(_text_:data in 2669) [ClassicSimilarity], result of:
              0.028802028 = score(doc=2669,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.17468026 = fieldWeight in 2669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2669)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In this paper, we focus on applying sentiment analysis to resources from online art collections, by exploiting, as information source, tags intended as textual traces that visitors leave to comment artworks on social platforms. We present a framework where methods and tools from a set of disciplines, ranging from Semantic and Social Web to Natural Language Processing, provide us the building blocks for creating a semantic social space to organize artworks according to an ontology of emotions. The ontology is inspired by Plutchik's circumplex model, a well-founded psychological model of human emotions. Users can be involved in the creation of the emotional space, through a graphical interactive interface. The development of such semantic space enables new ways of accessing and exploring art collections. The affective categorization model and the emotion detection output are encoded into W3C ontology languages. This gives us the twofold advantage to enable tractable reasoning on detected emotions and related artworks, and to foster the interoperability and integration of tools developed in the Semantic Web and Linked Data community. The proposal has been evaluated against a real-world case study, a dataset of tagged multimedia artworks from the ArsMeteo Italian online collection, and validated through a user study.
  20. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.01
    0.005760405 = product of:
      0.01152081 = sum of:
        0.01152081 = product of:
          0.02304162 = sum of:
            0.02304162 = weight(_text_:data in 2671) [ClassicSimilarity], result of:
              0.02304162 = score(doc=2671,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.1397442 = fieldWeight in 2671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2671)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In recent years, there has been a rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users to find their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems. This is mainly because personalized search and recommendations can be facilitated by measuring relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about the resources by tags. Therefore, it is necessary to take sentiment relevance into account in such measurements. In this paper, we present a novel generic framework, SenticRank, which incorporates various kinds of sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information in addressing the problem of personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in the experiments, the results of which have verified the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries, and find that SenticNet is the most prominent knowledge base for boosting the performance of personalized search in folksonomy.
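The tag-based profile matching described in the abstract above can be illustrated with a minimal cosine-similarity sketch. This is a toy example, not the SenticRank framework (which additionally incorporates sentiment relevance into the measurement); all tags and weights are invented for illustration:

```python
from math import sqrt

def cosine(u: dict, r: dict) -> float:
    """Cosine similarity between two tag-weight profiles (tag -> weight)."""
    dot = sum(w * r.get(t, 0.0) for t, w in u.items())
    norm_u = sqrt(sum(w * w for w in u.values()))
    norm_r = sqrt(sum(w * w for w in r.values()))
    return dot / (norm_u * norm_r) if norm_u and norm_r else 0.0

# Hypothetical user and resource profiles (tags and weights invented).
user = {"impressionism": 0.8, "landscape": 0.5, "serene": 0.3}
monet = {"impressionism": 0.9, "landscape": 0.7, "water": 0.4}
photo = {"street": 0.9, "urban": 0.6}

# The resource whose tag profile overlaps the user's profile ranks higher.
score_monet = cosine(user, monet)  # overlapping tags -> positive score
score_photo = cosine(user, photo)  # no overlapping tags -> 0.0
```

Conventional measurements of this kind treat tags purely lexically; the paper's point is that two tags with equal weight can carry very different sentiment, which such a similarity ignores.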