Search (12 results, page 1 of 1)

  • author_ss:"Sun, Y."
  1. Leydesdorff, L.; Sun, Y.: National and international dimensions of the Triple Helix in Japan : university-industry-government versus international coauthorship relations (2009) 0.02
    
    Abstract
    International co-authorship relations and university-industry-government (Triple Helix) relations have hitherto been studied separately. Using Japanese publication data for the 1981-2004 period, we were able to study both kinds of relations in a single design. In the Japanese file, 1,277,030 articles with at least one Japanese address were attributed to the three sectors, and we know additionally whether these papers were coauthored internationally. Using the mutual information in three and four dimensions, respectively, we show that the Japanese Triple-Helix system has been continuously eroded at the national level. However, since the mid-1990s, international coauthorship relations have contributed to a reduction of the uncertainty at the national level. In other words, the national publication system of Japan has developed a capacity to retain surplus value generated internationally. In a final section, we compare these results with an analysis based on similar data for Canada. A relative uncoupling of national university-industry-government relations because of international collaborations is indicated in both countries.
    Date
    22.3.2009 19:07:20
    Type
    a
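The "mutual information in three dimensions" used in this study is Leydesdorff's transmission measure T = H_u + H_i + H_g - H_ui - H_ug - H_ig + H_uig, where negative values indicate a reduction of uncertainty at the system level. A minimal sketch, with invented toy counts rather than the Japanese publication data:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information_3d(joint):
    """Transmission in three dimensions:
    T = H_u + H_i + H_g - H_ui - H_ug - H_ig + H_uig."""
    p = joint / joint.sum()
    h_u = entropy(p.sum(axis=(1, 2)))
    h_i = entropy(p.sum(axis=(0, 2)))
    h_g = entropy(p.sum(axis=(0, 1)))
    h_ui = entropy(p.sum(axis=2))
    h_ug = entropy(p.sum(axis=1))
    h_ig = entropy(p.sum(axis=0))
    h_uig = entropy(p)
    return h_u + h_i + h_g - h_ui - h_ug - h_ig + h_uig

# Hypothetical toy counts: papers cross-classified by university,
# industry, and government involvement (0/1 on each axis).
counts = np.array([[[40.0, 5.0], [10.0, 2.0]],
                   [[20.0, 8.0], [6.0, 9.0]]])
print(mutual_information_3d(counts))
```

For a fully independent joint distribution the transmission is zero; negative values signal synergy among the three dimensions.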
  2. Wacholder, N.; Kelly, D.; Kantor, P.; Rittman, R.; Sun, Y.; Bai, B.; Small, S.; Yamrom, B.; Strzalkowski, T.: A model for quantitative evaluation of an end-to-end question-answering system (2007) 0.00
    
    Abstract
    We describe a procedure for quantitative evaluation of interactive question-answering systems and illustrate it with application to the High-Quality Interactive Question-Answering (HITIQA) system. Our objectives were (a) to design a method to realistically and reliably assess interactive question-answering systems by comparing the quality of reports produced using different systems, (b) to conduct a pilot test of this method, and (c) to perform a formative evaluation of the HITIQA system. Far more important than the specific information gathered from this pilot evaluation is the development of (a) a protocol for evaluating an emerging technology, (b) reusable assessment instruments, and (c) the knowledge gained in conducting the evaluation. We conclude that this method, which uses a surprisingly small number of subjects and does not rely on predetermined relevance judgments, measures the impact of system change on work produced by users. Therefore, this method can be used to compare the product of interactive systems that use different underlying technologies.
    Type
    a
  3. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.00
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. It can also be applied to various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of the usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without manual examination. Given a query, ATD returns the topics associated with the query and the top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
    Type
    a
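The iterative principal-eigenvector computation the abstract builds on can be sketched as power iteration in the HITS tradition of the cited Kleinberg work; this is only the eigenvector step, not the ATD method's base-set construction or clustering, and the adjacency matrix below is a made-up example:

```python
import numpy as np

def principal_eigenvector(M, iters=100, tol=1e-10):
    """Power iteration: repeatedly multiply and renormalize until the
    vector stops changing; converges to the principal eigenvector."""
    v = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        w = M @ v
        w = w / np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v

# Hypothetical 4-page link matrix; A[i, j] = 1 means page i links to page j.
# HITS-style authority scores are the principal eigenvector of A.T @ A.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1]], dtype=float)
authority = principal_eigenvector(A.T @ A)
print(authority)
```

Page 2, with the most in-links, dominates the resulting authority vector.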
  4. Sun, Y.; Kantor, P.B.: Cross-evaluation : a new model for information system evaluation (2006) 0.00
    
    Abstract
    In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, a system we call Cross-Evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the "three realities" paradigm, which calls for "real tasks," "real users," and "real systems."
    Type
    a
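The five-effect linear model described above (user, system, task, judge, self-judgment bias) can be sketched as ordinary least squares on dummy-coded effects. The data below are simulated, and the design is only an illustration of how the self-judgment bias can be isolated from the system effect, not the AntWorld analysis itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_systems, n_tasks = 4, 2, 3

rows, y = [], []
for u in range(n_users):
    for s in range(n_systems):
        for t in range(n_tasks):
            for j in range(n_users):  # every user also acts as judge
                self_bias = 1.0 if j == u else 0.0
                # one-hot columns for user, system, task, judge + bias flag
                x = np.zeros(n_users + n_systems + n_tasks + n_users + 1)
                x[u] = 1
                x[n_users + s] = 1
                x[n_users + n_systems + t] = 1
                x[n_users + n_systems + n_tasks + j] = 1
                x[-1] = self_bias
                rows.append(x)
                # simulated score: system 1 is better by 0.3,
                # self-judgment inflates the score by 0.5
                y.append(0.3 * s + 0.5 * self_bias + rng.normal(0, 0.01))

X, y = np.array(rows), np.array(y)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated self-judgment bias:", coef[-1])
print("estimated system contrast:", coef[n_users + 1] - coef[n_users])
```

Although the one-hot blocks make the design rank-deficient, within-factor contrasts and the self-judgment interaction remain identifiable, so the minimum-norm least-squares fit recovers both simulated effects.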
  5. Ng, K.B.; Kantor, P.B.; Strzalkowski, T.; Wacholder, N.; Tang, R.; Bai, B.; Rittman, R.; Song, P.; Sun, Y.: Automated judgment of document qualities (2006) 0.00
    
    Abstract
    The authors report on a series of experiments to automate the assessment of document qualities such as depth and objectivity. The primary purpose is to develop a quality-sensitive functionality, orthogonal to relevance, to select documents for an interactive question-answering system. The study consisted of two stages. In the classifier construction stage, nine document qualities deemed important by information professionals were identified and classifiers were developed to predict their values. In the confirmative evaluation stage, the performance of the developed methods was checked using a different document collection. The quality prediction methods worked well in the second stage. The results strongly suggest that the best way to predict document qualities automatically is to construct classifiers on a person-by-person basis.
    Type
    a
  6. Sun, Y.; Wang, N.; Shen, X.-L.; Zhang, X.: Bias effects, synergistic effects, and information contingency effects : developing and testing an extended information adoption model in social Q&A (2019) 0.00
    
    Abstract
    To advance the theoretical understanding on information adoption, this study tries to extend the information adoption model (IAM) in three ways. First, this study considers the relationship between source credibility and argument quality and the relationship between herding factors and information usefulness (i.e., bias effects). Second, this study proposes the interaction effects of source credibility and argument quality and the interaction effects of herding factors and information usefulness (i.e., synergistic effects). Third, this study explores the moderating role of an information characteristic - search versus experience information (i.e., information contingency effects). The proposed extended information adoption model (EIAM) is empirically tested through a 2 by 2 by 2 experiment in the social Q&A context, and the results confirm most of the hypotheses. Finally, theoretical contributions and practical implications are discussed.
    Footnote
    Part of a special issue for research on people's engagement with technology.
    Type
    a
  7. Zhang, Y.; Sun, Y.; Xie, B.: Quality of health information for consumers on the web : a systematic review of indicators, criteria, tools, and evaluation results (2015) 0.00
    
    Abstract
    The quality of online health information for consumers has been a critical issue that concerns all stakeholders in healthcare. To gain an understanding of how quality is evaluated, this systematic review examined 165 articles in which researchers evaluated the quality of consumer-oriented health information on the web against predefined criteria. It was found that studies typically evaluated quality in relation to the substance and formality of content, as well as to the design of technological platforms. Attention to design, particularly interactivity, privacy, and social and cultural appropriateness is on the rise, which suggests the permeation of a user-centered perspective into the evaluation of health information systems, and a growing recognition of the need to study these systems from a social-technical perspective. Researchers used many preexisting instruments to facilitate evaluation of the formality of content; however, only a few were used in multiple studies, and their validity was questioned. The quality of content (i.e., accuracy and completeness) was always evaluated using proprietary instruments constructed based on medical guidelines or textbooks. The evaluation results revealed that the quality of health information varied across medical domains and across websites, and that the overall quality remained problematic. Future research is needed to examine the quality of user-generated content and to explore opportunities offered by emerging new media that can facilitate the consumer evaluation of health information.
    Type
    a
  8. Shen, X.-L.; Li, Y.-J.; Sun, Y.; Chen, J.; Wang, F.: Knowledge withholding in online knowledge spaces : social deviance behavior and secondary control perspective (2019) 0.00
    
    Abstract
    Knowledge withholding, which is defined as the likelihood that an individual devotes less than full effort to knowledge contribution, can be regarded as an emerging social deviance behavior for knowledge practice in online knowledge spaces. However, prior studies placed a great emphasis on proactive knowledge behaviors, such as knowledge sharing and contribution, but failed to consider the uniqueness of knowledge withholding. To capture the social-deviant nature of knowledge withholding and to better understand how people deal with counterproductive knowledge behaviors, this study develops a research model based on the secondary control perspective. Empirical analyses were conducted using the data collected from an online knowledge space. The results indicate that both predictive control and vicarious control exert a positive influence on knowledge withholding. This study also incorporates knowledge-withholding acceptability as a moderating variable of secondary control strategies. In particular, knowledge-withholding acceptability enhances the impact of predictive control, whereas it weakens the effect of vicarious control on knowledge withholding. This study concludes with a discussion of the key findings, and the implications for both research and practice.
    Type
    a
  9. Wu, D.; Xu, H.; Sun, Y.; Lv, S.: What should we teach? : A human-centered data science graduate curriculum model design for iField schools (2023) 0.00
    
    Abstract
    The information schools, also referred to as iField schools, are leaders in data science education. This study aims to develop a data science graduate curriculum model from an information science perspective to support iField schools in developing data science graduate education. In June 2020, information about 96 data science graduate programs from iField schools worldwide was collected and analyzed using a mixed research method based on inductive content analysis. The analysis yielded a wide range of data science competencies and skills and 12 knowledge topics covered by the curricula. The humanistic model is then taken as the theoretical and methodological basis for course model construction: the 12 course knowledge topics are reconstructed into 4 course modules, including (a) data-driven methods and techniques; (b) domain knowledge; (c) legal, moral, and ethical aspects of data; and (d) shaping and developing personal traits, forming a human-centered data science graduate curriculum model. The study closes by discussing the wide application prospects of this model.
    Type
    a
  10. Xu, S.; Zhai, D.; Wang, F.; An, X.; Pang, H.; Sun, Y.: A novel method for topic linkages between scientific publications and patents (2019) 0.00
    
    Abstract
    It is increasingly important to build topic linkages between scientific publications and patents in order to understand the relationships between science and technology. Previous studies of such linkages mainly analyze nonpatent references on the front pages of patents, or the resulting citation-link networks, but with unsatisfactory performance. Meanwhile, the abundance of entities mentioned in scholarly articles and patents further complicates topic linkage. To deal with this situation, a novel statistical entity-topic model (named the CCorrLDA2 model), armed with the collapsed Gibbs sampling inference algorithm, is proposed to discover the hidden topics in academic articles and patents, respectively. To reduce the negative impact on topic similarity calculation, word tokens and entity mentions are grouped by the Brown clustering method. The construction of topic linkages is then transformed into the well-known optimal transportation problem, after topic similarity is calculated on the basis of symmetrized Kullback-Leibler (KL) divergence. Extensive experimental results indicate that our approach builds topic linkages with performance superior to the counterpart methods.
    Type
    a
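The symmetrized Kullback-Leibler divergence underlying the topic similarity step can be sketched as follows; the CCorrLDA2 topics themselves are not reproduced here, so the two topic-word distributions are invented:

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence KL(p||q) + KL(q||p) between two
    topic-word distributions; eps guards against zero probabilities."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Two hypothetical topic-word distributions over a 3-word vocabulary.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(sym_kl(p, q))
```

Topic similarity can then be defined as a decreasing function of the divergence, e.g. sim = 1 / (1 + sym_kl(p, q)), before feeding the pairwise similarities into the optimal-transport step.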
  11. Kelly, D.; Wacholder, N.; Rittman, R.; Sun, Y.; Kantor, P.; Small, S.; Strzalkowski, T.: Using interview data to identify evaluation criteria for interactive, analytical question-answering systems (2007) 0.00
    
    Abstract
    The purpose of this work is to identify potential evaluation criteria for interactive, analytical question-answering (QA) systems by analyzing evaluative comments made by users of such a system. Qualitative data collected from intelligence analysts during interviews and focus groups were analyzed to identify common themes related to performance, use, and usability. These data were collected as part of an intensive, three-day evaluation workshop of the High-Quality Interactive Question Answering (HITIQA) system. Inductive coding and memoing were used to identify and categorize these data. Results suggest potential evaluation criteria for interactive, analytical QA systems, which can be used to guide the development and design of future systems and evaluations. This work contributes to studies of QA systems, information seeking and use behaviors, and interactive searching.
    Type
    a
  12. Sun, Y.; Kantor, P.B.; Morse, E.L.: Using cross-evaluation to evaluate interactive QA systems (2011) 0.00
    
    Abstract
    In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks and context, and of the users of the systems. Statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality, or of the relevance of materials. We conclude that the method is indeed effective for comparing end-to-end QA systems, and for comparing interactive systems with high efficiency.
    Type
    a