Search (1 result, page 1 of 1)

  • author_ss:"Alani, H."
  • theme_ss:"Inhaltsanalyse"
  1. Saif, H.; He, Y.; Fernandez, M.; Alani, H.: Contextual semantics for sentiment analysis of Twitter (2016) 0.00
    0.0023218233 = product of:
      0.011609117 = sum of:
        0.011609117 = product of:
          0.034827348 = sum of:
            0.034827348 = weight(_text_:f in 2667) [ClassicSimilarity], result of:
              0.034827348 = score(doc=2667,freq=4.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.31138954 = fieldWeight in 2667, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2667)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
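
    The relevance score above is a standard Lucene/Solr ClassicSimilarity (TF-IDF) explanation. Below is a minimal sketch that recomputes it from the reported constants, assuming Lucene's usual tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the queryNorm, fieldNorm, and coord factors are taken directly from the explain tree.

      import math

      # Constants copied from the explain tree above.
      doc_freq, max_docs = 2232, 44218
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~= 3.985786

      query_norm = 0.028060954                          # reported queryNorm
      query_weight = idf * query_norm                   # ~= 0.11184496

      tf = math.sqrt(4.0)                               # tf = sqrt(termFreq) = 2.0
      field_norm = 0.0390625                            # reported fieldNorm(doc=2667)
      field_weight = tf * idf * field_norm              # ~= 0.31138954

      term_score = query_weight * field_weight          # ~= 0.034827348
      score = term_score * (1 / 3) * (1 / 5)            # coord(1/3) * coord(1/5)
      print(score)                                      # ~= 0.0023218233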
    
    Abstract
    Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both the commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Unlike typical lexicon-based approaches, which assign fixed and static prior sentiment polarities to words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and updates their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both the entity level and the tweet level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detection. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy on two datasets, but falls marginally behind by 1% in F-measure on the third dataset.
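
    For illustration only, the sketch below shows a toy context-driven lexicon adjustment in the spirit of the idea described in the abstract: each word's prior polarity is blended with the average prior of the words it co-occurs with. It is not the authors' SentiCircles algorithm, and the lexicon values, function names, and blending rule are invented assumptions.

      from collections import Counter
      from itertools import combinations

      # Toy illustration only -- NOT the authors' SentiCircles method.
      # Hypothetical prior lexicon; the values are invented for the example.
      PRIOR = {"great": 0.8, "delay": -0.6, "flight": 0.0, "love": 0.9}

      def adapt_lexicon(tweets, prior, alpha=0.5):
          """Blend each word's prior polarity with the mean prior of the
          words it co-occurs with across the tweet collection."""
          cooc = {w: Counter() for w in prior}
          for tokens in tweets:
              for a, b in combinations(set(tokens), 2):
                  if a in cooc and b in prior:
                      cooc[a][b] += 1
                  if b in cooc and a in prior:
                      cooc[b][a] += 1
          adapted = {}
          for w, neighbours in cooc.items():
              total = sum(neighbours.values())
              if total == 0:
                  adapted[w] = prior[w]
              else:
                  ctx = sum(prior[n] * c for n, c in neighbours.items()) / total
                  adapted[w] = (1 - alpha) * prior[w] + alpha * ctx
          return adapted

      # "flight" co-occurs with both positive and negative words, so its
      # adapted polarity shifts away from its neutral prior.
      tweets = [["love", "flight"], ["flight", "delay"], ["great", "flight"]]
      print(adapt_lexicon(tweets, PRIOR))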