Search (624 results, page 1 of 32)

  • × year_i:[2010 TO 2020}
  • × type_ss:"a"
  1. Li, L.; He, D.; Zhang, C.; Geng, L.; Zhang, K.: Characterizing peer-judged answer quality on academic Q&A sites : a cross-disciplinary case study on ResearchGate (2018) 0.12
    0.119576484 = product of:
      0.23915297 = sum of:
        0.23915297 = sum of:
          0.20924342 = weight(_text_:q in 4637) [ClassicSimilarity], result of:
            0.20924342 = score(doc=4637,freq=8.0), product of:
              0.28916505 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04415143 = queryNorm
              0.7236124 = fieldWeight in 4637, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4637)
          0.029909546 = weight(_text_:22 in 4637) [ClassicSimilarity], result of:
            0.029909546 = score(doc=4637,freq=2.0), product of:
              0.15461078 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04415143 = queryNorm
              0.19345059 = fieldWeight in 4637, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4637)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Academic social question and answer (Q&A) sites are now utilised by millions of scholars and researchers for seeking and sharing discipline-specific information. However, little is known about the factors that can affect their votes on the quality of an answer, nor about how the discipline might influence these factors. The paper aims to discuss this issue.
    Design/methodology/approach: Using 1,021 answers collected across three disciplines (library and information services, history of art, and astrophysics) on ResearchGate, statistical analysis was performed to identify the characteristics of high-quality academic answers, and comparisons were made across the three disciplines. In particular, two major categories of characteristics, relating to the answer provider and to the answer content, were extracted and examined.
    Findings: The results reveal that high-quality answers on academic social Q&A sites tend to possess two characteristics: first, they are provided by scholars with higher academic reputations (e.g. more followers); and second, they provide objective information (e.g. longer answers with fewer subjective opinions). However, the impact of these factors varies across disciplines; e.g. objectivity is more favourable in physics than in other disciplines.
    Originality/value: The study is envisioned to help academic Q&A sites select and recommend high-quality answers across different disciplines, especially in cold-start scenarios where an answer has not yet received enough judgements from peers.
    Date
    20. 1.2015 18:30:22
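The indented score breakdowns in these entries are Lucene ClassicSimilarity "explain" output. As a minimal sketch (assuming Lucene's classic TF-IDF formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf × idf × fieldNorm, queryWeight = idf × queryNorm, with the final score scaled by coord), the numbers in entry 1 can be reproduced:

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    # score = queryWeight * fieldWeight
    #       = (idf * queryNorm) * (tf * idf * fieldNorm)
    tf = math.sqrt(freq)
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (tf * i * field_norm)

QUERY_NORM = 0.04415143  # queryNorm reported in the explain tree

# term "q" in doc 4637: freq=8, docFreq=171, fieldNorm=0.0390625
s_q = term_score(8.0, 171, 44218, 0.0390625, QUERY_NORM)
# term "22" in doc 4637: freq=2, docFreq=3622, fieldNorm=0.0390625
s_22 = term_score(2.0, 3622, 44218, 0.0390625, QUERY_NORM)

# coord(1/2): only one of the two query clauses groups matched fully
total = 0.5 * (s_q + s_22)
print(s_q, s_22, total)
```

Running this reproduces 0.20924342, 0.029909546, and the final 0.119576484 shown above, confirming the arithmetic of the explain tree.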
  2. Bao, Z.; Han, Z.: What drives users' participation in online social Q&A communities? : an empirical study based on social cognitive theory (2019) 0.12
    0.119576484 = product of:
      0.23915297 = sum of:
        0.23915297 = sum of:
          0.20924342 = weight(_text_:q in 5497) [ClassicSimilarity], result of:
            0.20924342 = score(doc=5497,freq=8.0), product of:
              0.28916505 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04415143 = queryNorm
              0.7236124 = fieldWeight in 5497, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5497)
          0.029909546 = weight(_text_:22 in 5497) [ClassicSimilarity], result of:
            0.029909546 = score(doc=5497,freq=2.0), product of:
              0.15461078 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04415143 = queryNorm
              0.19345059 = fieldWeight in 5497, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5497)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to examine drivers of users' participation in online social question-and-answer (Q&A) communities based on social cognitive theory, and to identify the underlying mechanism of this process.
    Design/methodology/approach: This study developed a research model to test the proposed hypotheses, and an online survey was employed to collect data. In total, 313 valid responses were collected, and partial least squares structural equation modeling was adopted to analyze the data.
    Findings: This study empirically finds that outcome expectations (personal outcome expectations and knowledge self-management outcome expectations) are positively related to participation in online social Q&A communities. At the same time, users' self-efficacy positively influences their participation behaviors: it not only directly motivates participation but also indirectly promotes it through the two dimensions of outcome expectations. In addition, perceived expertise and perceived similarity are two positive and significant environmental elements affecting users' participation.
    Originality/value: This study extends the understanding of how participation behaviors are motivated in the context of online social Q&A communities. Drawing on social cognitive theory, constructs were established based on the features of these communities, and several mediating effects in the motivating process were also discussed.
    Date
    20. 1.2015 18:30:22
  3. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.12
    0.11536615 = sum of:
      0.052593127 = product of:
        0.21037251 = sum of:
          0.21037251 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
            0.21037251 = score(doc=400,freq=2.0), product of:
              0.3743163 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04415143 = queryNorm
              0.56201804 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.25 = coord(1/4)
      0.06277303 = product of:
        0.12554605 = sum of:
          0.12554605 = weight(_text_:q in 400) [ClassicSimilarity], result of:
            0.12554605 = score(doc=400,freq=2.0), product of:
              0.28916505 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04415143 = queryNorm
              0.43416747 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.5 = coord(1/2)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
  4. Rosenbaum, H.; Shachaf, P.: ¬A structuration approach to online communities of practice : the case of Q&A communities (2010) 0.08
    0.076880954 = product of:
      0.15376191 = sum of:
        0.15376191 = product of:
          0.30752382 = sum of:
            0.30752382 = weight(_text_:q in 3916) [ClassicSimilarity], result of:
              0.30752382 = score(doc=3916,freq=12.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                1.0634888 = fieldWeight in 3916, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3916)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes an approach based on structuration theory (Giddens, 1979, 1984; Orlikowski, 1992, 2000) and communities of practice (Wenger, 1998) that can be used to guide investigation into the dynamics of online question and answer (Q&A) communities. This approach is useful because most research on Q&A sites has focused attention on information retrieval, information-seeking behavior, and information intermediation and has assumed uncritically that the online Q&A community plays an important role in these domains of study. Assuming instead that research on online communities should take into account social, technical, and contextual factors (Kling, Rosenbaum, & Sawyer, 2005), the utility of this approach is demonstrated with an analysis of three online Q&A communities seen as communities of practice. This article makes a theoretical contribution to the study of online Q&A communities and, more generally, to the domain of social reference.
  5. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.07
    0.073460095 = product of:
      0.14692019 = sum of:
        0.14692019 = sum of:
          0.10462171 = weight(_text_:q in 2590) [ClassicSimilarity], result of:
            0.10462171 = score(doc=2590,freq=2.0), product of:
              0.28916505 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04415143 = queryNorm
              0.3618062 = fieldWeight in 2590, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
          0.042298485 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
            0.042298485 = score(doc=2590,freq=4.0), product of:
              0.15461078 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04415143 = queryNorm
              0.27358043 = fieldWeight in 2590, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  6. Choi, E.; Shah, C.: User motivations for asking questions in online Q&A services (2016) 0.07
    0.06920077 = product of:
      0.13840154 = sum of:
        0.13840154 = product of:
          0.27680308 = sum of:
            0.27680308 = weight(_text_:q in 2896) [ClassicSimilarity], result of:
              0.27680308 = score(doc=2896,freq=14.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.9572494 = fieldWeight in 2896, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2896)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online Q&A services are information sources where people identify their information needs, formulate those needs in natural language, and interact with one another to satisfy them. Even though online Q&A has grown considerably in popularity in recent years and has impacted information-seeking behaviors, we still lack knowledge about what motivates people to ask a question in online Q&A environments. Yahoo! Answers and WikiAnswers were selected as the test beds in the study, and a sequential mixed method employing an Internet-based survey, a diary method, and interviews was used to investigate user motivations for asking a question in online Q&A services. Cognitive needs were found to be the most significant motivation driving people to ask a question. Yet it was found that other motivational factors (e.g., tension-free needs) also played an important role in user motivations for asking a question, depending on the asker's contexts and situations. Understanding motivations for asking a question could provide a general framework for conceptualizing different contexts and situations of information needs in online Q&A. The findings have several implications not only for developing better question-answering processes in online Q&A environments, but also for a broader understanding of online information-seeking behaviors.
  7. Lu, W.; Ding, H.; Jiang, J.: ¬A document expansion framework for tag-based image retrieval (2018) 0.07
    0.06726563 = product of:
      0.13453126 = sum of:
        0.13453126 = sum of:
          0.10462171 = weight(_text_:q in 4630) [ClassicSimilarity], result of:
            0.10462171 = score(doc=4630,freq=2.0), product of:
              0.28916505 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04415143 = queryNorm
              0.3618062 = fieldWeight in 4630, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4630)
          0.029909546 = weight(_text_:22 in 4630) [ClassicSimilarity], result of:
            0.029909546 = score(doc=4630,freq=2.0), product of:
              0.15461078 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04415143 = queryNorm
              0.19345059 = fieldWeight in 4630, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4630)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to utilize document expansion techniques to improve image representation and retrieval. The paper proposes a concise framework for tag-based image retrieval (TBIR).
    Design/methodology/approach: The proposed approach includes three core components: a strategy for selecting expansion (similar) images from the whole corpus (e.g. cluster-based or nearest neighbor-based); a technique for assessing image similarity, which is adopted for selecting expansion images (text, image, or mixed); and a model for matching the expanded image representation with the search query (merged or separate).
    Findings: The results show that applying the proposed method yields significant improvements in effectiveness; the method performs better at the top of the ranking and greatly improves some topics with zero scores in the baseline. Moreover, the nearest neighbor-based expansion strategy outperforms the cluster-based strategy, using image features for selecting expansion images is better than using text features in most cases, and the separate method for calculating the augmented probability P(q|RD) is able to offset the negative influence of erroneous images in RD.
    Research limitations/implications: Although these methods only outperform at the top of the ranking rather than over the entire ranked list, TBIR on mobile platforms can still benefit from this approach.
    Originality/value: Unlike former studies addressing the sparsity, vocabulary mismatch, and tag relatedness problems in TBIR individually, the approach proposed in this paper addresses all these issues within a single document expansion framework. It is a comprehensive investigation of document expansion techniques in TBIR.
    Date
    20. 1.2015 18:30:22
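The nearest neighbor-based expansion strategy that entry 7 finds most effective can be sketched as follows. This is an illustrative sketch, not the authors' code: it represents each image by its tag set, ranks corpus images by tag overlap (Jaccard similarity, a hypothetical choice here), and merges the tags of the k nearest neighbors into the expanded representation.

```python
def jaccard(a, b):
    # tag-overlap similarity between two images' tag sets
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def expand_tags(image_tags, corpus, k=2):
    # nearest neighbor-based document expansion: augment an image's
    # tags with the tags of its k most similar images in the corpus
    neighbors = sorted(corpus, key=lambda other: jaccard(image_tags, other),
                       reverse=True)[:k]
    expanded = set(image_tags)
    for n in neighbors:
        expanded |= set(n)
    return expanded

corpus = [{"beach", "sea", "sunset"}, {"beach", "sand"}, {"car", "road"}]
print(sorted(expand_tags({"beach", "sea"}, corpus, k=2)))
# -> ['beach', 'sand', 'sea', 'sunset']
```

The expanded tag set, rather than the original sparse one, is then matched against the query, which is how document expansion addresses tag sparsity and vocabulary mismatch.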
  8. Jiang, Z.; Gu, Q.; Yin, Y.; Wang, J.; Chen, D.: GRAW+ : a two-view graph propagation method with word coupling for readability assessment (2019) 0.07
    0.06726563 = product of:
      0.13453126 = sum of:
        0.13453126 = sum of:
          0.10462171 = weight(_text_:q in 5218) [ClassicSimilarity], result of:
            0.10462171 = score(doc=5218,freq=2.0), product of:
              0.28916505 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.04415143 = queryNorm
              0.3618062 = fieldWeight in 5218, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
          0.029909546 = weight(_text_:22 in 5218) [ClassicSimilarity], result of:
            0.029909546 = score(doc=5218,freq=2.0), product of:
              0.15461078 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04415143 = queryNorm
              0.19345059 = fieldWeight in 5218, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5218)
      0.5 = coord(1/2)
    
    Date
    15. 4.2019 13:46:22
  9. Westbrook, L.: Intimate partner violence online : expectations and agency in question and answer websites (2015) 0.06
    0.06342355 = product of:
      0.1268471 = sum of:
        0.1268471 = product of:
          0.2536942 = sum of:
            0.2536942 = weight(_text_:q in 1670) [ClassicSimilarity], result of:
              0.2536942 = score(doc=1670,freq=6.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.8773336 = fieldWeight in 1670, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1670)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents the first situation-rooted typology of intimate partner violence (IPV) postings on social question and answer (Q&A) sites. Survivors as well as abusers post high-risk health, legal, and financial questions to Q&A sites; answers come from individuals who self-identify as lawyers, experts, survivors, and abusers. Using grounded theory, this study examines 1,241 individual posts, each within its own context, raising issues of agency and expectations. Informed by Savolainen's everyday life information seeking (ELIS) and Nahl's affective load theory (ALT), the resulting Q&A typology suggests implications for IPV service design, policy development, and research priorities.
  10. Shah, C.; Kitzie, V.: Social Q&A and virtual reference : comparing apples and oranges with the help of experts and users (2012) 0.06
    0.05848532 = product of:
      0.11697064 = sum of:
        0.11697064 = product of:
          0.23394129 = sum of:
            0.23394129 = weight(_text_:q in 457) [ClassicSimilarity], result of:
              0.23394129 = score(doc=457,freq=10.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.8090234 = fieldWeight in 457, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=457)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two categories, social Q&A (SQA) and virtual reference (VR), and examine how experts (librarians) and end users (students) evaluate information within each. To accomplish this, we first performed an extensive literature review and compiled a list of the aspects found to contribute to a "good" answer. These aspects were divided among three high-level concepts: relevance, quality, and satisfaction. We then interviewed both experts and users, asking them first to reflect on their online Q&A experiences and then to comment on our list of aspects. These interviews uncovered two main disparities: one between users' expectations of these services and how information was actually delivered by them, and the other between the perceptions of users and experts with regard to the aforementioned three characteristics of relevance, quality, and satisfaction. Using qualitative analyses of both the interviews and the relevant literature, we suggest ways to create better hybrid solutions for online Q&A and to bridge the gap between experts' and users' understandings of relevance, quality, and satisfaction, as well as the perceived importance of each in contributing to a good answer.
  11. Li, D.; Kwong, C.-P.: Understanding latent semantic indexing : a topological structure analysis using Q-analysis (2010) 0.05
    0.054363046 = product of:
      0.10872609 = sum of:
        0.10872609 = product of:
          0.21745218 = sum of:
            0.21745218 = weight(_text_:q in 3427) [ClassicSimilarity], result of:
              0.21745218 = score(doc=3427,freq=6.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.7520002 = fieldWeight in 3427, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3427)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The method of latent semantic indexing (LSI) is well known for tackling the synonymy and polysemy problems in information retrieval; however, its performance can differ greatly across datasets, and the questions of which characteristics of a dataset contribute to this difference, and why, have not been fully understood. In this article, we propose that the mathematical structure of simplexes can be attached to a term-document matrix in the vector space model (VSM) for information retrieval. The Q-analysis devised by R.H. Atkin (1974) may then be applied to analyze the topological structure of the simplexes and their corresponding dataset. Experimental results of this analysis reveal that there is a correlation between the effectiveness of LSI and the topological structure of the dataset. By using the information obtained from the topological analysis, we develop a new method to explore the semantic information in a dataset. Experimental results show that our method can enhance the performance of VSM for datasets over which LSI is not effective.
    Object
    Q-analysis
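The core combinatorial idea behind Atkin's Q-analysis, as used in entry 11, can be sketched briefly. This is an illustrative sketch, not the authors' implementation: each document is treated as a simplex whose vertices are its terms, and two simplexes are q-near when they share a face of dimension q, i.e. q + 1 common vertices.

```python
def shared_face_dim(doc_a_terms, doc_b_terms):
    # In Q-analysis, each document is a simplex whose vertices are its
    # terms. Two simplexes are q-near when they share a face of
    # dimension q, i.e. q + 1 common vertices; -1 means no shared terms.
    return len(set(doc_a_terms) & set(doc_b_terms)) - 1

d1 = {"latent", "semantic", "indexing", "retrieval"}
d2 = {"semantic", "retrieval", "vector"}
d3 = {"topology"}
print(shared_face_dim(d1, d2), shared_face_dim(d1, d3))  # -> 1 -1
```

Chains of such q-near documents give the q-connectivity structure that the article correlates with LSI's effectiveness on a dataset.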
  12. Wu, Q.: ¬The w-index : a measure to assess scientific impact by focusing on widely cited papers (2010) 0.05
    0.052310854 = product of:
      0.10462171 = sum of:
        0.10462171 = product of:
          0.20924342 = sum of:
            0.20924342 = weight(_text_:q in 3428) [ClassicSimilarity], result of:
              0.20924342 = score(doc=3428,freq=8.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.7236124 = fieldWeight in 3428, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Based on the principles of the h-index, I propose a new measure, the w-index, as a particularly simple and more useful way to assess the substantial impact of a researcher's work, especially regarding excellent papers. The w-index can be defined as follows: If w of a researcher's papers have at least 10w citations each and the other papers have fewer than 10(w+1) citations, that researcher's w-index is w. The results demonstrate that there are noticeable differences between the w-index and the h-index, because the w-index pays close attention to the more widely cited papers. These discrepancies can be measured by comparing the ranks of 20 astrophysicists, a few famous physical scientists, and 16 Price medalists. Furthermore, I put forward the w(q)-index to improve the discriminatory power of the w-index and to rank scientists with the same w. The factor q is the least number of citations a researcher with w-index w needs to reach w+1. In terms of both simplicity and accuracy, the w-index or w(q)-index can be widely used for evaluation of scientists, journals, conferences, scientific topics, research institutions, and so on.
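The w-index definition above ("w papers with at least 10w citations each") is easy to operationalize. A minimal sketch, following that definition directly on a sorted citation list:

```python
def w_index(citations):
    # w-index (Wu, 2010): the largest w such that w of the papers
    # have at least 10*w citations each
    cits = sorted(citations, reverse=True)
    w = 0
    while w < len(cits) and cits[w] >= 10 * (w + 1):
        w += 1
    return w

# three papers have >= 30 citations each, but not four with >= 40
print(w_index([95, 60, 34, 21, 12, 8]))  # -> 3
```

The h-index is computed the same way with the threshold `w + 1` instead of `10 * (w + 1)`, which is why the w-index rewards widely cited papers so much more strongly.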
  13. Lou, J.; Fang, Y.; Lim, K.H.; Peng, J.Z.: Contributing high quantity and quality knowledge to online Q&A communities (2013) 0.05
    0.052310854 = product of:
      0.10462171 = sum of:
        0.10462171 = product of:
          0.20924342 = sum of:
            0.20924342 = weight(_text_:q in 615) [ClassicSimilarity], result of:
              0.20924342 = score(doc=615,freq=8.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.7236124 = fieldWeight in 615, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=615)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study investigates the motivational factors affecting the quantity and quality of voluntary knowledge contribution in online Q&A communities. Although previous studies focus on knowledge contribution quantity, this study regards quantity and quality as two important, yet distinct, aspects of knowledge contribution. Drawing on self-determination theory, this study proposes that five motivational factors, categorized along the extrinsic-intrinsic spectrum of motivation, have differential effects on knowledge contribution quantity versus quality in the context of online Q&A communities. An online survey with 367 participants was conducted in a leading online Q&A community to test the research model. Results show that rewards in the reputation system, learning, knowledge self-efficacy, and enjoyment of helping stand out as important motivations. Furthermore, rewards in the reputation system, as a manifestation of external regulation, are more effective in facilitating knowledge contribution quantity than quality. Knowledge self-efficacy, as a manifestation of intrinsic motivation, is more strongly related to knowledge contribution quality, whereas the other intrinsic motivation, enjoyment of helping, is more strongly associated with knowledge contribution quantity. Both theoretical and practical implications are discussed.
  14. Savolainen, R.: Providing informational support in an online discussion group and a Q&A site : the case of travel planning (2015) 0.05
    0.052310854 = product of:
      0.10462171 = sum of:
        0.10462171 = product of:
          0.20924342 = sum of:
            0.20924342 = weight(_text_:q in 1660) [ClassicSimilarity], result of:
              0.20924342 = score(doc=1660,freq=8.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.7236124 = fieldWeight in 1660, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1660)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study examines the ways in which informational support based on user-generated content is provided for the needs of leisure-related travel planning in an online discussion group and on a Q&A site. Attention is paid to the grounds by which the participants bolster the informational support. The findings draw on the analysis of 200 threads from a Finnish online discussion group and a Yahoo! Answers question and answer (Q&A) forum. Three main types of informational support were identified: providing factual information, providing advice, and providing personal opinion. The grounds used in the answers varied across the types of informational support. When providing factual information, participants most often grounded their answers in a description of the attributes of an entity. In the context of providing advice, reference to external sources of information was employed most frequently. Finally, when providing personal opinions, the participants most often bolstered their views by articulating positive or negative evaluations of an entity. Overall, regarding the grounds, there were more similarities than differences between the discussion group and the Q&A site.
  15. Miao, Q.; Li, Q.; Zeng, D.: Fine-grained opinion mining by integrating multiple review sources (2010) 0.05
    0.05178511 = product of:
      0.10357022 = sum of:
        0.10357022 = product of:
          0.20714045 = sum of:
            0.20714045 = weight(_text_:q in 4104) [ClassicSimilarity], result of:
              0.20714045 = score(doc=4104,freq=4.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.7163398 = fieldWeight in 4104, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4104)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Gazan, R.: Social Q&A (2011) 0.04
    0.044387236 = product of:
      0.08877447 = sum of:
        0.08877447 = product of:
          0.17754894 = sum of:
            0.17754894 = weight(_text_:q in 4933) [ClassicSimilarity], result of:
              0.17754894 = score(doc=4933,freq=4.0), product of:
                0.28916505 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.04415143 = queryNorm
                0.61400557 = fieldWeight in 4933, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4933)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents a review and analysis of the research literature in social Q&A (SQA), a term describing systems where people ask, answer, and rate content while interacting around it. The growth of SQA is contextualized within the broader trend of user-generated content from Usenet to Web 2.0, and alternative definitions of SQA are reviewed. SQA sites have been conceptualized in the literature as simultaneous examples of tools, collections, communities, and complex sociotechnical systems. Major threads of SQA research include user-generated and algorithmic question categorization, answer classification and quality assessment, studies of user satisfaction, reward structures, and motivation for participation, and how trust and expertise are both operationalized by and emerge from SQA sites. Directions for future research are discussed, including more refined conceptions of SQA site participants and their roles, unpacking the processes by which social capital is achieved, managed, and wielded in SQA sites, refining question categorization, conducting research within and across a wider range of SQA sites, the application of economic and game-theoretic models, and the problematization of SQA itself.
  17. Savolainen, R.: ¬The structure of argument patterns on a social Q&A site (2012) 0.04
    
    Abstract
    This study investigates the argument patterns in Yahoo! Answers, a major question and answer (Q&A) site. Mainly drawing on the ideas of Toulmin, argument pattern is conceptualized as a set of 5 major elements: claim, counterclaim, rebuttal, support, and grounds. The combinations of these elements result in diverse argument patterns. A failed opening consists of an initial claim only, whereas a nonoppositional argument pattern also includes indications of support. An oppositional argument pattern contains the elements of counterclaim and rebuttal. A mixed argument pattern entails all 5 elements. The empirical data were gathered by downloading from Yahoo! Answers 100 discussion threads discussing global warming, a controversial topic that provides fertile ground for arguments for and against. Of the argument patterns, failed openings were most frequent, followed by oppositional, nonoppositional, and mixed patterns. In most cases, the participants grounded their arguments by drawing on personal beliefs and facts. The findings suggest that oppositional and mixed argument patterns provide more opportunities for the assessment of the quality and credibility of answers, as compared to failed openings and nonoppositional argument patterns.
  18. Wu, P.F.; Korfiatis, N.: You scratch someone's back and we'll scratch yours : collective reciprocity in social Q&A communities (2013) 0.04
    
    Abstract
    Taking a structuration perspective and integrating reciprocity research in economics, this study examines the dynamics of reciprocal interactions in social question & answer communities. We postulate that individual users of social Q&A constantly adjust their kindness in the direction of the observed benefit and effort of others. Collective reciprocity emerges from this pattern of conditional strategy of reciprocation and helps form a structure that guides the very interactions that give birth to the structure. Based on a large sample of data from Yahoo! Answers, our empirical analysis supports the collective reciprocity premise, showing that the more effort (relative to benefit) an asker contributes to the community, the more likely the community will return the favor. On the other hand, the more benefit (relative to effort) the asker takes from the community, the less likely the community will cooperate in terms of providing answers. We conclude that a structuration view of reciprocity sheds light on the duality of social norms in online communities.
  19. Li, J.; Sun, A.; Xing, Z.: To do or not to do : distill crowdsourced negative caveats to augment api documentation (2018) 0.04
    
    Abstract
    Negative caveats of application programming interfaces (APIs) are about "how not to use an API," and they are often absent from the official API documentation. When these caveats are overlooked, programming errors may emerge from misusing APIs, leading to heavy discussions on Q&A websites like Stack Overflow. If the overlooked caveats could be mined from these discussions, they would help programmers avoid misuse of APIs. However, mining them is challenging because the discussions are informal, redundant, and diverse. To address this, we propose Disca, a novel approach for automatically distilling desirable API negative caveats from unstructured Q&A discussions. Through sentence selection and prominent term clustering, Disca ensures that distilled caveats are context-independent, prominent, semantically diverse, and nonredundant. Quantitative evaluation in our experiments shows that Disca significantly outperforms four text-summarization techniques. Qualitative analysis also shows that the distilled API negative caveats could greatly augment API documentation.
  20. Sun, Y.; Wang, N.; Shen, X.-L.; Zhang, X.: Bias effects, synergistic effects, and information contingency effects : developing and testing an extended information adoption model in social Q&A (2019) 0.04
    
    Abstract
    To advance the theoretical understanding of information adoption, this study extends the information adoption model (IAM) in three ways. First, this study considers the relationship between source credibility and argument quality and the relationship between herding factors and information usefulness (i.e., bias effects). Second, this study proposes the interaction effects of source credibility and argument quality and the interaction effects of herding factors and information usefulness (i.e., synergistic effects). Third, this study explores the moderating role of an information characteristic, namely search versus experience information (i.e., information contingency effects). The proposed extended information adoption model (EIAM) is empirically tested through a 2 by 2 by 2 experiment in the social Q&A context, and the results confirm most of the hypotheses. Finally, theoretical contributions and practical implications are discussed.
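The relevance figure attached to each result above (e.g. the 0.04 after a title) comes from Lucene's ClassicSimilarity, whose "explain" breakdown is reproduced for the first entry: tf is the square root of the term frequency, queryWeight is idf scaled by queryNorm, fieldWeight is tf × idf × fieldNorm, and the final score multiplies these by the coord factors. A minimal Python sketch that reproduces the arithmetic from that breakdown (function and variable names here are illustrative, not part of the Lucene API):

```python
import math

def classic_similarity_score(freq, idf, field_norm, query_norm, coord=1.0):
    """Reproduce Lucene ClassicSimilarity 'explain' arithmetic:
    tf = sqrt(freq); queryWeight = idf * queryNorm;
    fieldWeight = tf * idf * fieldNorm; score = coord * queryWeight * fieldWeight.
    """
    tf = math.sqrt(freq)
    query_weight = idf * query_norm        # 6.5493927 * 0.04415143 ≈ 0.28916505
    field_weight = tf * idf * field_norm   # 2.0 * 6.5493927 * 0.046875 ≈ 0.61400557
    return coord * query_weight * field_weight

# idf as reported in the breakdown: 1 + ln(maxDocs / (docFreq + 1))
idf = 1.0 + math.log(44218 / (171 + 1))   # ≈ 6.5494

# Factors reported for result 16 (doc 4933); the two 0.5 coord factors multiply out
score = classic_similarity_score(
    freq=4.0, idf=6.5493927, field_norm=0.046875,
    query_norm=0.04415143, coord=0.5 * 0.5)
print(round(score, 4))  # ≈ 0.0444, i.e. the 0.04 shown next to the result
```

The same decomposition, with freq=8.0 and a different fieldNorm, accounts for the scores of the other entries in the list.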

Languages

  • e 486
  • d 136
  • a 1
