Search (721 results, page 1 of 37)

  • Active filter: year_i:[2010 TO 2020} (Lucene range syntax: the square bracket is an inclusive bound, the curly brace an exclusive one)
  1. Hogan, N.M.; Sweeney, K.J.: Social networking and scientific communication : a paradoxical return to Mertonian roots? (2013) 0.11
    0.10655749 = product of:
      0.21311498 = sum of:
        0.21311498 = sum of:
          0.16573438 = weight(_text_:opinion in 611) [ClassicSimilarity], result of:
            0.16573438 = score(doc=611,freq=2.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.50652874 = fieldWeight in 611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0546875 = fieldNorm(doc=611)
          0.0473806 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
            0.0473806 = score(doc=611,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.2708308 = fieldWeight in 611, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=611)
      0.5 = coord(1/2)
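    (The nested breakdown above is Lucene "explain" output for ClassicSimilarity TF-IDF scoring; a worked recomputation of these numbers follows the result list.)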
    
    Date
    22. 3.2013 19:53:52
    Series
    Opinion paper
  2. Badia, A.: Data, information, knowledge : an information science analysis (2014) 0.11
    0.10655749 = product of:
      0.21311498 = sum of:
        0.21311498 = sum of:
          0.16573438 = weight(_text_:opinion in 1296) [ClassicSimilarity], result of:
            0.16573438 = score(doc=1296,freq=2.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.50652874 = fieldWeight in 1296, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1296)
          0.0473806 = weight(_text_:22 in 1296) [ClassicSimilarity], result of:
            0.0473806 = score(doc=1296,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.2708308 = fieldWeight in 1296, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1296)
      0.5 = coord(1/2)
    
    Abstract
    I analyze the text of an article that appeared in this journal in 2007 that published the results of a questionnaire in which a number of experts were asked to define the concepts of data, information, and knowledge. I apply standard information retrieval techniques to build a list of the most frequent terms in each set of definitions. I then apply information extraction techniques to analyze how the top terms are used in the definitions. As a result, I draw data-driven conclusions about the aggregate opinion of the experts. I contrast this with the original analysis of the data to provide readers with an alternative viewpoint on what the data tell us.
    Date
    16. 6.2014 19:22:57
  3. Lueg, C.; Banks, B.; Michalek, M.; Dimsey, J.; Oswin, D.: Close encounters of the fifth kind : recognizing system-initiated engagement as interaction type (2019) 0.11
    0.10655749 = product of:
      0.21311498 = sum of:
        0.21311498 = sum of:
          0.16573438 = weight(_text_:opinion in 5252) [ClassicSimilarity], result of:
            0.16573438 = score(doc=5252,freq=2.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.50652874 = fieldWeight in 5252, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5252)
          0.0473806 = weight(_text_:22 in 5252) [ClassicSimilarity], result of:
            0.0473806 = score(doc=5252,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.2708308 = fieldWeight in 5252, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5252)
      0.5 = coord(1/2)
    
    Date
    15. 5.2019 19:22:59
    Series
    Opinion paper
  4. Varathan, K.D.; Giachanou, A.; Crestani, F.: Comparative opinion mining : a review (2017) 0.10
    0.10252156 = product of:
      0.20504312 = sum of:
        0.20504312 = product of:
          0.41008624 = sum of:
            0.41008624 = weight(_text_:opinion in 3540) [ClassicSimilarity], result of:
              0.41008624 = score(doc=3540,freq=24.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                1.2533337 = fieldWeight in 3540, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion mining refers to the use of natural language processing, text analysis, and computational linguistics to identify and extract subjective information in textual material. Opinion mining, also known as sentiment analysis, has received a lot of attention in recent times, as it provides a number of tools to analyze public opinion on a number of different topics. Comparative opinion mining is a subfield of opinion mining which deals with identifying and extracting information that is expressed in a comparative form (e.g., "paper X is better than paper Y"). Comparative opinion mining plays a very important role when one tries to evaluate something because it provides a reference point for the comparison. This paper provides a review of the area of comparative opinion mining. It is the first review that covers this topic specifically, as all previous reviews dealt mostly with general opinion mining. This survey covers comparative opinion mining from two different angles: one from the perspective of techniques and the other from the perspective of comparative opinion elements. It also incorporates preprocessing tools as well as data sets that were used by past researchers and that can be useful to future researchers in the field of comparative opinion mining.
  5. Nguyen, T.T.; Quan, T.T.; Phan, T.T.: Sentiment search : an emerging trend on social media monitoring systems (2014) 0.10
    0.10063015 = product of:
      0.2012603 = sum of:
        0.2012603 = sum of:
          0.167417 = weight(_text_:opinion in 1625) [ClassicSimilarity], result of:
            0.167417 = score(doc=1625,freq=4.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.5116713 = fieldWeight in 1625, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1625)
          0.033843286 = weight(_text_:22 in 1625) [ClassicSimilarity], result of:
            0.033843286 = score(doc=1625,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.19345059 = fieldWeight in 1625, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1625)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to discuss sentiment search, which not only retrieves data related to submitted keywords but also identifies sentiment opinion implied in the retrieved data and the subject targeted by this opinion. Design/methodology/approach - The authors propose a retrieval framework known as Cross-Domain Sentiment Search (CSS), which combines the usage of domain ontologies with specific linguistic rules to handle sentiment terms in textual data. The CSS framework also supports incrementally enriching domain ontologies when applied in new domains. Findings - The authors found that domain ontologies are extremely helpful when CSS is applied in specific domains. In the meantime, the embedded linguistic rules make CSS achieve better performance as compared to data mining techniques. Research limitations/implications - The approach has been initially applied in a real social monitoring system of a professional IT company. Thus, it is proved to be able to handle real data acquired from social media channels such as electronic newspapers or social networks. Originality/value - The authors have placed aspect-based sentiment analysis in the context of semantic search and introduced the CSS framework for the whole sentiment search process. The formal definitions of Sentiment Ontology and aspect-based sentiment analysis are also presented. This distinguishes the work from other related works.
    Date
    20. 1.2015 18:30:22
  6. Bhattacharya, S.; Yang, C.; Srinivasan, P.; Boynton, B.: Perceptions of presidential candidates' personalities in twitter (2016) 0.10
    0.10063015 = product of:
      0.2012603 = sum of:
        0.2012603 = sum of:
          0.167417 = weight(_text_:opinion in 2635) [ClassicSimilarity], result of:
            0.167417 = score(doc=2635,freq=4.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.5116713 = fieldWeight in 2635, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2635)
          0.033843286 = weight(_text_:22 in 2635) [ClassicSimilarity], result of:
            0.033843286 = score(doc=2635,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.19345059 = fieldWeight in 2635, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2635)
      0.5 = coord(1/2)
    
    Abstract
    Political sentiment analysis using social media, especially Twitter, has attracted wide interest in recent years. In such research, opinions about politicians are typically divided into positive, negative, or neutral. In our research, the goal is to mine political opinion from social media at a higher resolution by assessing statements of opinion related to the personality traits of politicians; this is an angle that has not yet been considered in social media research. A second goal is to contribute a novel retrieval-based approach for tracking public perception of personality using Gough and Heilbrun's Adjective Check List (ACL) of 110 terms describing key traits. This is in contrast to the typical lexical and machine-learning approaches used in sentiment analysis. High-precision search templates developed from the ACL were run on an 18-month span of Twitter posts mentioning Obama and Romney and these retrieved more than half a million tweets. For example, the results indicated that Romney was perceived as more of an achiever and Obama was perceived as somewhat more friendly. The traits were also aggregated into 14 broad personality dimensions. For example, Obama rated far higher than Romney on the Moderation dimension and lower on the Machiavellianism dimension. The temporal variability of such perceptions was explored.
    Date
    22. 1.2016 11:25:47
  7. Guo, L.; Wan, X.: Exploiting syntactic and semantic relationships between terms for opinion retrieval (2012) 0.10
    0.1004502 = product of:
      0.2009004 = sum of:
        0.2009004 = product of:
          0.4018008 = sum of:
            0.4018008 = weight(_text_:opinion in 492) [ClassicSimilarity], result of:
              0.4018008 = score(doc=492,freq=16.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                1.2280111 = fieldWeight in 492, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=492)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion retrieval is the task of finding documents that express an opinion about a given query. A key challenge in opinion retrieval is to capture the query-related opinion score of a document. Existing methods rely mainly on the proximity information between the opinion terms and the query terms to address the key challenge. In this study, we propose to incorporate the syntactic and semantic information of terms into a probabilistic model to capture the query-related opinion score more accurately. The syntactic tree structure of a sentence is used to evaluate the modifying probability between an opinion term and a noun within the sentence with a tree kernel method. Moreover, WordNet and the probabilistic topic model are used to evaluate the semantic relatedness between any noun and the given query. The experimental results over standard TREC baselines on the benchmark BLOG06 collection demonstrate the effectiveness of our proposed method, in comparison with the proximity-based method and other baselines.
  8. Li, D.; Tang, J.; Ding, Y.; Shuai, X.; Chambers, T.; Sun, G.; Luo, Z.; Zhang, J.: Topic-level opinion influence model (TOIM) : an investigation using tencent microblogging (2015) 0.09
    0.093588956 = product of:
      0.18717791 = sum of:
        0.18717791 = product of:
          0.37435582 = sum of:
            0.37435582 = weight(_text_:opinion in 2345) [ClassicSimilarity], result of:
              0.37435582 = score(doc=2345,freq=20.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                1.1441319 = fieldWeight in 2345, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Text mining has been widely used in multiple types of user-generated data to infer user opinion, but its application to microblogging is difficult because text messages are short and noisy, providing limited information about user opinion. Given that microblogging users communicate with each other to form a social network, we hypothesize that user opinion is influenced by its neighbors in the network. In this paper, we infer user opinion on a topic by combining two factors: the user's historical opinion about relevant topics and opinion influence from his/her neighbors. We thus build a topic-level opinion influence model (TOIM) by integrating both topic factor and opinion influence factor into a unified probabilistic model. We evaluate our model in one of the largest microblogging sites in China, Tencent Weibo, and the experiments show that TOIM outperforms baseline methods in opinion inference accuracy. Moreover, incorporating indirect influence further improves inference recall and f1-measure. Finally, we demonstrate some useful applications of TOIM in analyzing users' behaviors in Tencent Weibo.
  9. Belbachir, F.; Boughanem, M.: Using language models to improve opinion detection (2018) 0.09
    0.08536626 = product of:
      0.17073251 = sum of:
        0.17073251 = product of:
          0.34146503 = sum of:
            0.34146503 = weight(_text_:opinion in 5044) [ClassicSimilarity], result of:
              0.34146503 = score(doc=5044,freq=26.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                1.0436088 = fieldWeight in 5044, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5044)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion mining is one of the most important research tasks in the information retrieval research community. With the huge volume of opinionated data available on the Web, approaches must be developed to differentiate opinion from fact. In this paper, we present a lexicon-based approach for opinion retrieval. Generally, opinion retrieval consists of two stages: relevance to the query and opinion detection. In our work, we focus on the second stage, which itself focuses on detecting opinionated documents. We compare the document to be analyzed with opinionated sources that contain subjective information. We hypothesize that a document with a strong similarity to opinionated sources is more likely to be opinionated itself. Typical lexicon-based approaches treat and choose their opinion sources according to their test collection, then calculate the opinion score based on the frequency of subjective terms in the document. In our work, we use different open opinion collections without any specific treatment and consider them as a reference collection. We then use language models to determine opinion scores. The analysis document and reference collection are represented by different language models (i.e., Dirichlet, Jelinek-Mercer and two-stage models). These language models are generally used in information retrieval to represent the relationship between documents and queries. However, in our study, we modify these language models to represent opinionated documents. We carry out several experiments using Text REtrieval Conference (TREC) Blogs 06 as our analysis collection and Internet Movie Database (IMDb), Multi-Perspective Question Answering (MPQA) and CHESLY as our reference collection. To improve opinion detection, we study the impact of using different language models to represent the document and reference collection alongside different combinations of opinion and retrieval scores. We then use this data to deduce the best opinion detection models. Using the best models, our approach improves on the best baseline of TREC Blog (baseline4) by 30%.
  10. Huang, H.-H.; Wang, J.-J.; Chen, H.-H.: Implicit opinion analysis : extraction and polarity labelling (2017) 0.08
    0.08286719 = product of:
      0.16573438 = sum of:
        0.16573438 = product of:
          0.33146876 = sum of:
            0.33146876 = weight(_text_:opinion in 3820) [ClassicSimilarity], result of:
              0.33146876 = score(doc=3820,freq=8.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                1.0130575 = fieldWeight in 3820, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3820)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion words are crucial information for sentiment analysis. In some text, however, opinion words are absent or highly ambiguous. The resulting implicit opinions are more difficult to extract and label than explicit ones. In this paper, cutting-edge machine-learning approaches - deep neural network and word-embedding - are adopted for implicit opinion mining at the snippet and clause levels. Hotel reviews written in Chinese are collected and annotated as the experimental data set. Results show the convolutional neural network models not only outperform traditional support vector machine models, but also capture hidden knowledge within the raw text. The strength of word-embedding is also analyzed.
  11. Huang, J.; Boh, W.F.; Goh, K.H.: Opinion convergence versus polarization : examining opinion distributions in online word-of-mouth (2019) 0.08
    0.079412855 = product of:
      0.15882571 = sum of:
        0.15882571 = product of:
          0.31765142 = sum of:
            0.31765142 = weight(_text_:opinion in 5411) [ClassicSimilarity], result of:
              0.31765142 = score(doc=5411,freq=10.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.97082806 = fieldWeight in 5411, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5411)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We examine how opinion distributions (i.e., opinion polarization and convergence over time) differ across product salient platforms (product platforms) versus product non-salient platforms (non-product platforms). Drawing on the theory of impression management, we hypothesize and explain when and why consumers choose to post their comments on different platforms, and how their behavior will be affected when they choose to post on online platforms. To test the hypotheses, we collected and text-mined online posts from product platforms such as review aggregator sites, discussion forums, and consumer rating websites, and non-product platforms such as microblogs. The results showed that product platforms have more polarized opinions, and exhibit more convergence in opinion across time, compared with non-product platforms. Our findings advise researchers and practitioners to pay attention to the characteristics of online platforms, and how users' perceptions of the purpose of the online platform may affect their online posting behavior.
  12. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.07934699 = product of:
      0.15869398 = sum of:
        0.15869398 = product of:
          0.47608194 = sum of:
            0.47608194 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.47608194 = score(doc=973,freq=2.0), product of:
                0.42354685 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04995828 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf
  13. Ménard, E.; Dorey, J.: TIIARA : a new bilingual taxonomy for image indexing (2014) 0.08
    0.07611249 = product of:
      0.15222497 = sum of:
        0.15222497 = sum of:
          0.11838169 = weight(_text_:opinion in 1374) [ClassicSimilarity], result of:
            0.11838169 = score(doc=1374,freq=2.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.3618062 = fieldWeight in 1374, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1374)
          0.033843286 = weight(_text_:22 in 1374) [ClassicSimilarity], result of:
            0.033843286 = score(doc=1374,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.19345059 = fieldWeight in 1374, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1374)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents the final phase of a research project that aims to develop a bilingual taxonomy (English and French) for the indexing of ordinary digital images. The objective of this last stage was to ask a representative sample of image searchers to complete retrieval tasks of images indexed using the new taxonomy TIIARA to measure its degree of effectiveness and efficiency. During this experiment, a sample of 60 participants were asked to indicate where in the taxonomic structure they thought they would find each one of the 30 images shown. Respondents also completed a questionnaire intended to obtain their general opinion on TIIARA and to report any difficulties encountered during the retrieval process. The quantitative data was analyzed according to statistical methods, while the content of the open-ended questions was analyzed and coded to identify emergent themes. The findings of this ultimate phase of the research project indicated that, despite the fact that some categories still need further refining, TIIARA already constitutes a successful tool that provides access to ordinary images. Furthermore, the bilingual taxonomy constitutes a definite benefit for image searchers who are not very familiar with images indexed in English, which is still the dominant language of the Web.
    Date
    3. 9.2014 19:22:07
  14. Choi, Y.; Syn, S.Y.: Characteristics of tagging behavior in digitized humanities online collections (2016) 0.08
    0.07611249 = product of:
      0.15222497 = sum of:
        0.15222497 = sum of:
          0.11838169 = weight(_text_:opinion in 2891) [ClassicSimilarity], result of:
            0.11838169 = score(doc=2891,freq=2.0), product of:
              0.3271964 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04995828 = queryNorm
              0.3618062 = fieldWeight in 2891, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2891)
          0.033843286 = weight(_text_:22 in 2891) [ClassicSimilarity], result of:
            0.033843286 = score(doc=2891,freq=2.0), product of:
              0.17494538 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04995828 = queryNorm
              0.19345059 = fieldWeight in 2891, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2891)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this study was to examine user tags that describe digitized archival collections in the field of humanities. A collection of 8,310 tags from a digital portal (Nineteenth-Century Electronic Scholarship, NINES) was analyzed to find out what attributes of primary historical resources users described with tags. Tags were categorized to identify which tags describe the content of the resource, the resource itself, and subjective aspects (e.g., usage or emotion). The study's findings revealed that over half were content-related; tags representing opinion, usage context, or self-reference, however, reflected only a small percentage. The study further found that terms related to genre or physical format of a resource were frequently used in describing primary archival resources. It was also learned that nontextual resources had lower numbers of content-related tags and higher numbers of document-related tags than textual resources and bibliographic materials; moreover, textual resources tended to have more user-context-related tags than other resources. These findings help explain users' tagging behavior and resource interpretation in primary resources in the humanities. Such information provided through tags helps information professionals decide to what extent indexing archival and cultural resources should be done for resource description and discovery, and understand users' terminology.
    Date
    21. 4.2016 11:23:22
  15. Miao, Q.; Li, Q.; Zeng, D.: Fine-grained opinion mining by integrating multiple review sources (2010) 0.07
    0.071765095 = product of:
      0.14353019 = sum of:
        0.14353019 = product of:
          0.28706038 = sum of:
            0.28706038 = weight(_text_:opinion in 4104) [ClassicSimilarity], result of:
              0.28706038 = score(doc=4104,freq=6.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.8773336 = fieldWeight in 4104, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4104)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    With the rapid development of Web 2.0, online reviews have become extremely valuable sources for mining customers' opinions. Fine-grained opinion mining has attracted more and more attention of both applied and theoretical research. In this article, the authors study how to automatically mine product features and opinions from multiple review sources. Specifically, they propose an integration strategy to solve the issue. Within the integration strategy, the authors mine domain knowledge from semistructured reviews and then exploit the domain knowledge to assist product feature extraction and sentiment orientation identification from unstructured reviews. Finally, feature-opinion tuples are generated. Experimental results on real-world datasets show that the proposed approach is effective.
  16. Fernández, R.T.; Losada, D.E.: Effective sentence retrieval based on query-independent evidence (2012) 0.07
    0.071029015 = product of:
      0.14205803 = sum of:
        0.14205803 = product of:
          0.28411606 = sum of:
            0.28411606 = weight(_text_:opinion in 2728) [ClassicSimilarity], result of:
              0.28411606 = score(doc=2728,freq=8.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.86833495 = fieldWeight in 2728, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2728)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we propose an effective sentence retrieval method that consists of incorporating query-independent features into standard sentence retrieval models. To meet this aim, we apply a formal methodology and consider different query-independent features. In particular, we show that opinion-based features are promising. Opinion mining is an increasingly important research topic but little is known about how to improve retrieval algorithms with opinion-based components. In this respect, we consider here different kinds of opinion-based features to act as query-independent evidence and study whether this incorporation improves retrieval performance. On the other hand, information needs are usually related to people, locations or organizations. We hypothesize here that using these named entities as query-independent features may also improve the sentence relevance estimation. Finally, the length of the retrieval unit has been shown to be an important component in different retrieval scenarios. We therefore include length-based features in our study. Our evaluation demonstrates that, either in isolation or in combination, these query-independent features help to improve substantially the performance of state-of-the-art sentence retrieval methods.
  17. Christensen, H.D.: Rethinking image indexing? (2017) 0.07
    0.071029015 = product of:
      0.14205803 = sum of:
        0.14205803 = product of:
          0.28411606 = sum of:
            0.28411606 = weight(_text_:opinion in 3697) [ClassicSimilarity], result of:
              0.28411606 = score(doc=3697,freq=2.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.86833495 = fieldWeight in 3697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3697)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Opinion paper
  18. Bawden, D.; Robinson, L.: No such thing as society? : On the individuality of information behavior (2013) 0.07
    0.0669668 = product of:
      0.1339336 = sum of:
        0.1339336 = product of:
          0.2678672 = sum of:
            0.2678672 = weight(_text_:opinion in 1139) [ClassicSimilarity], result of:
              0.2678672 = score(doc=1139,freq=4.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.8186741 = fieldWeight in 1139, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This opinion piece considers the relative importance of individual and social factors in determining information behavior. It concludes that individual factors are more central and fundamental, although they may certainly be qualified by social and cultural factors and even though there are good reasons for studying and analyzing information behavior in terms of social groups. More studies of interesting emergent factors and behaviors in social settings would be valuable.
    Series
    Opinion
  19. Osman, D.J.; Yearwood, J.; Vamplew, P.: Automated opinion detection : implications of the level of agreement between human raters (2010) 0.07
    0.06617738 = product of:
      0.13235477 = sum of:
        0.13235477 = product of:
          0.26470953 = sum of:
            0.26470953 = weight(_text_:opinion in 4232) [ClassicSimilarity], result of:
              0.26470953 = score(doc=4232,freq=10.0), product of:
                0.3271964 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04995828 = queryNorm
                0.8090234 = fieldWeight in 4232, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The ability to agree with the TREC Blog06 opinion assessments was measured for seven human assessors and compared with the submitted results of the Blog06 participants. The assessors achieved a fair level of agreement between their assessments, although the range between the assessors was large. It is recommended that multiple assessors are used to assess opinion data, or a pre-test of assessors is completed to remove the most dissenting assessors from a pool of assessors prior to the assessment process. The possibility of inconsistent assessments in a corpus also raises concerns about training data for an automated opinion detection system (AODS), so a further recommendation is that AODS training data be assembled from a variety of sources. This paper establishes an aspirational value for an AODS by determining the level of agreement achievable by human assessors when assessing the existence of an opinion on a given topic. Knowing the level of agreement amongst humans is important because it sets an upper bound on the expected performance of AODS. While the AODSs surveyed achieved satisfactory results, none achieved a result close to the upper bound.
  20. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.066122495 = product of:
      0.13224499 = sum of:
        0.13224499 = product of:
          0.39673495 = sum of:
            0.39673495 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39673495 = score(doc=1826,freq=2.0), product of:
                0.42354685 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04995828 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
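
The nested score breakdowns attached to each result are Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model. Each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the per-term contributions are summed and then scaled by each coord factor (matched clauses / total clauses). The Python sketch below recomputes the breakdown for result 1 (doc 611) from the constants reported above. It is an illustration of the formula, not a reimplementation of the search engine: queryNorm and fieldNorm are taken as given rather than derived from the full query and the field length.

import math

# Recompute the ClassicSimilarity breakdown shown for result 1 (doc 611).
# queryNorm and fieldNorm are copied from the explain output above.

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity term frequency
    return math.sqrt(freq)

def term_score(freq, doc_freq, max_docs, query_norm, field_norm, boost=1.0):
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * boost * query_norm      # query-side weight
    field_weight = tf(freq) * term_idf * field_norm   # document-side weight
    return query_weight * field_weight

QUERY_NORM = 0.04995828   # query-level constant, identical in every breakdown above
FIELD_NORM = 0.0546875    # encoded length norm reported for doc 611

opinion = term_score(freq=2, doc_freq=171, max_docs=44218,
                     query_norm=QUERY_NORM, field_norm=FIELD_NORM)
term_22 = term_score(freq=2, doc_freq=3622, max_docs=44218,
                     query_norm=QUERY_NORM, field_norm=FIELD_NORM)

total = (opinion + term_22) * 0.5   # coord(1/2): one of two top-level clauses matched

print(opinion)   # ~0.16573438
print(term_22)   # ~0.0473806
print(total)     # ~0.10655749, displayed in the result list as 0.11

Running the sketch reproduces, up to floating-point rounding, the three figures shown for result 1: 0.16573438 for the term "opinion", 0.0473806 for the term "22", and 0.10655749 overall.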

Languages

  • e 533
  • d 180
  • a 1
  • hu 1

Types

  • a 633
  • el 59
  • m 46
  • s 17
  • x 12
  • r 7
  • b 5
  • i 1
  • z 1
