Search (5 results, page 1 of 1)

  • Filter: author_ss:"Sun, Y."
  • Filter: year_i:[2010 TO 2020}
  1. Sun, Y.; Wang, N.; Shen, X.-L.; Zhang, X.: Bias effects, synergistic effects, and information contingency effects : developing and testing an extended information adoption model in social Q&A (2019) 0.00
    0.0020392092 = product of:
      0.0040784185 = sum of:
        0.0040784185 = product of:
          0.008156837 = sum of:
            0.008156837 = weight(_text_:a in 5439) [ClassicSimilarity], result of:
              0.008156837 = score(doc=5439,freq=10.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.1709182 = fieldWeight in 5439, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5439)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
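The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and its arithmetic can be checked by hand. A minimal sketch, with every constant copied from the explain output; the formulas are Lucene's classic scoring, and the variable names are illustrative:

```python
import math

# Constants copied from the explain tree for doc 5439.
freq = 10.0              # termFreq of "a" in the matched field
idf = 1.153047           # idf(docFreq=37942, maxDocs=44218)
query_norm = 0.041389145 # queryNorm
field_norm = 0.046875    # fieldNorm(doc=5439)

tf = math.sqrt(freq)                     # 3.1622777 = tf(freq=10.0)
field_weight = tf * idf * field_norm     # 0.1709182 = fieldWeight
query_weight = idf * query_norm          # 0.04772363 = queryWeight
raw_score = query_weight * field_weight  # 0.008156837 = weight(_text_:a ...)

# Two coord(1/2) factors halve the raw weight twice.
final_score = raw_score * 0.5 * 0.5      # 0.0020392092
print(final_score)
```

Each intermediate value matches the corresponding node of the explain tree, so the listed document score is simply tf x idf x fieldNorm x queryWeight, scaled by the two coordination factors.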
    
    Abstract
    To advance the theoretical understanding of information adoption, this study extends the information adoption model (IAM) in three ways. First, it considers the relationship between source credibility and argument quality, and the relationship between herding factors and information usefulness (i.e., bias effects). Second, it proposes interaction effects of source credibility and argument quality, and interaction effects of herding factors and information usefulness (i.e., synergistic effects). Third, it explores the moderating role of an information characteristic, search versus experience information (i.e., information contingency effects). The proposed extended information adoption model (EIAM) is empirically tested through a 2 by 2 by 2 experiment in the social Q&A context, and the results confirm most of the hypotheses. Finally, theoretical contributions and practical implications are discussed.
    Footnote
    Part of a special issue for research on people's engagement with technology.
    Type
    a
  2. Zhang, Y.; Sun, Y.; Xie, B.: Quality of health information for consumers on the web : a systematic review of indicators, criteria, tools, and evaluation results (2015) 0.00
    
    Abstract
    The quality of online health information for consumers has been a critical issue that concerns all stakeholders in healthcare. To understand how quality is evaluated, this systematic review examined 165 articles in which researchers evaluated the quality of consumer-oriented health information on the web against predefined criteria. Studies typically evaluated quality in relation to the substance and formality of content, as well as to the design of technological platforms. Attention to design, particularly interactivity, privacy, and social and cultural appropriateness, is on the rise, which suggests the permeation of a user-centered perspective into the evaluation of health information systems and a growing recognition of the need to study these systems from a sociotechnical perspective. Researchers used many preexisting instruments to facilitate evaluation of the formality of content; however, only a few were used in multiple studies, and their validity was questioned. The quality of content (i.e., accuracy and completeness) was always evaluated with proprietary instruments constructed from medical guidelines or textbooks. The evaluation results revealed that the quality of health information varied across medical domains and across websites, and that the overall quality remained problematic. Future research is needed to examine the quality of user-generated content and to explore opportunities offered by emerging new media that can facilitate consumer evaluation of health information.
    Type
    a
  3. Shen, X.-L.; Li, Y.-J.; Sun, Y.; Chen, J.; Wang, F.: Knowledge withholding in online knowledge spaces : social deviance behavior and secondary control perspective (2019) 0.00
    
    Abstract
    Knowledge withholding, defined as the likelihood that an individual devotes less than full effort to knowledge contribution, can be regarded as an emerging social deviance behavior in online knowledge spaces. However, prior studies placed great emphasis on proactive knowledge behaviors, such as knowledge sharing and contribution, and failed to consider the uniqueness of knowledge withholding. To capture the socially deviant nature of knowledge withholding and to better understand how people deal with counterproductive knowledge behaviors, this study develops a research model based on the secondary control perspective. Empirical analyses were conducted using data collected from an online knowledge space. The results indicate that both predictive control and vicarious control exert a positive influence on knowledge withholding. The study also incorporates knowledge-withholding acceptability as a moderator of the secondary control strategies: knowledge-withholding acceptability strengthens the impact of predictive control, whereas it weakens the effect of vicarious control on knowledge withholding. The study concludes with a discussion of the key findings and the implications for both research and practice.
    Type
    a
  4. Xu, S.; Zhai, D.; Wang, F.; An, X.; Pang, H.; Sun, Y.: A novel method for topic linkages between scientific publications and patents (2019) 0.00
    
    Abstract
    It is increasingly important to build topic linkages between scientific publications and patents in order to understand the relationships between science and technology. Previous studies of such linkages mainly analyze the nonpatent references on the front pages of patents, or the resulting citation-link networks, but with unsatisfactory performance. Meanwhile, the abundant entities mentioned in scholarly articles and patents further complicate topic linkage. To deal with this situation, a novel statistical entity-topic model (the CCorrLDA2 model), equipped with a collapsed Gibbs sampling inference algorithm, is proposed to discover the hidden topics in academic articles and in patents. To reduce the negative impact on topic similarity calculation, word tokens and entity mentions are grouped with the Brown clustering method. The topic linkage construction problem is then transformed into the well-known optimal transportation problem, after topic similarity is calculated on the basis of symmetrized Kullback-Leibler (KL) divergence. Extensive experimental results indicate that our approach builds topic linkages with performance superior to that of the counterpart methods.
    Type
    a
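The topic-similarity step described in the abstract above, symmetrized Kullback-Leibler divergence between topic-word distributions, can be sketched as follows. The toy distributions and the epsilon smoothing are illustrative assumptions, not the authors' implementation:

```python
import math

def sym_kl(p, q, eps=1e-12):
    # Symmetrized KL divergence between two topic-word distributions
    # over a shared vocabulary; eps guards against division by zero.
    def kl(a, b):
        return sum(x * math.log((x + eps) / (y + eps))
                   for x, y in zip(a, b) if x > 0)
    return 0.5 * (kl(p, q) + kl(q, p))

# Toy example: one topic from the publication model, one from the
# patent model, over a shared three-word vocabulary.
paper_topic = [0.5, 0.3, 0.2]
patent_topic = [0.4, 0.4, 0.2]
distance = sym_kl(paper_topic, patent_topic)
print(distance)
```

Symmetrizing makes the measure order-independent and zero for identical topics; the abstract then casts the matching of low-divergence topic pairs as an optimal transportation problem.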
  5. Sun, Y.; Kantor, P.B.; Morse, E.L.: Using cross-evaluation to evaluate interactive QA systems (2011) 0.00
    
    Abstract
    In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks, context, and the users of the systems; statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality or relevance of materials. We conclude that the method is effective for comparing end-to-end QA systems, and for comparing interactive systems, with high efficiency.
    Type
    a