Search (6 results, page 1 of 1)

  • × author_ss:"Sun, Y."
  • × year_i:[2000 TO 2010}
  1. Leydesdorff, L.; Sun, Y.: National and international dimensions of the Triple Helix in Japan : university-industry-government versus international coauthorship relations (2009) 0.02
    0.019396545 = product of:
      0.03879309 = sum of:
        0.03879309 = sum of:
          0.008140436 = weight(_text_:a in 2761) [ClassicSimilarity], result of:
            0.008140436 = score(doc=2761,freq=12.0), product of:
              0.043477926 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.037706986 = queryNorm
              0.18723148 = fieldWeight in 2761, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2761)
          0.030652655 = weight(_text_:22 in 2761) [ClassicSimilarity], result of:
            0.030652655 = score(doc=2761,freq=2.0), product of:
              0.13204344 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037706986 = queryNorm
              0.23214069 = fieldWeight in 2761, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2761)
      0.5 = coord(1/2)
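The explain tree above follows Lucene's ClassicSimilarity (tf-idf) formula: each leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm; the coord(1/2) factor then scales the clause sum. As a minimal sketch, the printed numbers can be reproduced directly:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm):
    """One leaf of the explain tree:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    """
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Leaf for _text_:a in doc 2761 (freq=12.0, fieldNorm=0.046875)
w_a = classic_similarity(12.0, 1.153047, 0.037706986, 0.046875)
# Leaf for _text_:22 in doc 2761 (freq=2.0)
w_22 = classic_similarity(2.0, 3.5018296, 0.037706986, 0.046875)
# coord(1/2) scales the summed clause scores
total = (w_a + w_22) * 0.5
```

Running this reproduces the leaf weights 0.008140436 and 0.030652655 and the final document score 0.019396545 shown above.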
    
    Abstract
    International co-authorship relations and university-industry-government (Triple Helix) relations have hitherto been studied separately. Using Japanese publication data for the 1981-2004 period, we were able to study both kinds of relations in a single design. In the Japanese file, 1,277,030 articles with at least one Japanese address were attributed to the three sectors, and we know additionally whether these papers were coauthored internationally. Using the mutual information in three and four dimensions, respectively, we show that the Japanese Triple-Helix system has been continuously eroded at the national level. However, since the mid-1990s, international coauthorship relations have contributed to a reduction of the uncertainty at the national level. In other words, the national publication system of Japan has developed a capacity to retain surplus value generated internationally. In a final section, we compare these results with an analysis based on similar data for Canada. A relative uncoupling of national university-industry-government relations because of international collaborations is indicated in both countries.
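The "mutual information in three dimensions" used in this abstract is the configurational information T = H_u + H_i + H_g − H_ui − H_ug − H_ig + H_uig over the university, industry, and government dimensions; a negative value indicates uncertainty reduction at the system level. A minimal sketch of that measure (function names and the toy counts below are illustrative assumptions, not data from the paper):

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def triple_helix_T(counts):
    """Configurational information T_uig from a joint frequency table.

    counts: dict mapping (u, i, g) category tuples to frequencies.
    """
    n = sum(counts.values())

    def H(dims):
        # Marginal entropy over the selected dimensions.
        marg = defaultdict(float)
        for key, c in counts.items():
            marg[tuple(key[d] for d in dims)] += c / n
        return entropy(marg.values())

    return (H((0,)) + H((1,)) + H((2,))
            - H((0, 1)) - H((0, 2)) - H((1, 2))
            + H((0, 1, 2)))
```

For three independent variables T is zero; for perfectly correlated binary variables it is +1 bit.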
    Date
    22. 3.2009 19:07:20
    Type
    a
  2. Wacholder, N.; Kelly, D.; Kantor, P.; Rittman, R.; Sun, Y.; Bai, B.; Small, S.; Yamrom, B.; Strzalkowski, T.: A model for quantitative evaluation of an end-to-end question-answering system (2007) 0.00

    0.0026273148 = product of:
      0.0052546295 = sum of:
        0.0052546295 = product of:
          0.010509259 = sum of:
            0.010509259 = weight(_text_:a in 435) [ClassicSimilarity], result of:
              0.010509259 = score(doc=435,freq=20.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.24171482 = fieldWeight in 435, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=435)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
We describe a procedure for quantitative evaluation of interactive question-answering systems and illustrate it with application to the High-Quality Interactive Question-Answering (HITIQA) system. Our objectives were (a) to design a method to realistically and reliably assess interactive question-answering systems by comparing the quality of reports produced using different systems, (b) to conduct a pilot test of this method, and (c) to perform a formative evaluation of the HITIQA system. Far more important than the specific information gathered from this pilot evaluation is the development of (a) a protocol for evaluating an emerging technology, (b) reusable assessment instruments, and (c) the knowledge gained in conducting the evaluation. We conclude that this method, which uses a surprisingly small number of subjects and does not rely on predetermined relevance judgments, measures the impact of system change on work produced by users. Therefore this method can be used to compare the product of interactive systems that use different underlying technologies.
    Type
    a
  3. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.00
    0.0021981692 = product of:
      0.0043963385 = sum of:
        0.0043963385 = product of:
          0.008792677 = sum of:
            0.008792677 = weight(_text_:a in 2563) [ClassicSimilarity], result of:
              0.008792677 = score(doc=2563,freq=14.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.20223314 = fieldWeight in 2563, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. As well, it can be applied to various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without using manual examination. Given a query, ATD returns with topics associated with the query and top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
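The iterative principal-eigenvector computation mentioned here is, in Kleinberg-style link analysis, typically done by power iteration on an adjacency-derived matrix. A minimal power-iteration sketch (the ATD method combines this with base-set construction and clustering; the function name and matrix below are illustrative only):

```python
import math

def principal_eigenvector(M, iters=200):
    """Power iteration: repeatedly apply M and renormalize.

    M is a square matrix given as a list of rows; returns a unit
    vector approximating the dominant eigenvector.
    """
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

On a symmetric link matrix the iteration converges to the dominant eigenvector, whose large components identify the top representative pages for a topic.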
    Type
    a
  4. Sun, Y.; Kantor, P.B.: Cross-evaluation : a new model for information system evaluation (2006) 0.00
    0.0019582848 = product of:
      0.0039165695 = sum of:
        0.0039165695 = product of:
          0.007833139 = sum of:
            0.007833139 = weight(_text_:a in 5048) [ClassicSimilarity], result of:
              0.007833139 = score(doc=5048,freq=16.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.18016359 = fieldWeight in 5048, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5048)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, a system we call Cross-Evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the "three-realities" paradigm, which calls for "real tasks," "real users," and "real systems."
    Type
    a
  5. Ng, K.B.; Kantor, P.B.; Strzalkowski, T.; Wacholder, N.; Tang, R.; Bai, B.; Rittman, R.; Song, P.; Sun, Y.: Automated judgment of document qualities (2006) 0.00
    0.0018577921 = product of:
      0.0037155843 = sum of:
        0.0037155843 = product of:
          0.0074311686 = sum of:
            0.0074311686 = weight(_text_:a in 182) [ClassicSimilarity], result of:
              0.0074311686 = score(doc=182,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.1709182 = fieldWeight in 182, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=182)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The authors report on a series of experiments to automate the assessment of document qualities such as depth and objectivity. The primary purpose is to develop a quality-sensitive functionality, orthogonal to relevance, to select documents for an interactive question-answering system. The study consisted of two stages. In the classifier construction stage, nine document qualities deemed important by information professionals were identified and classifiers were developed to predict their values. In the confirmative evaluation stage, the performance of the developed methods was checked using a different document collection. The quality prediction methods worked well in the second stage. The results strongly suggest that the best way to predict document qualities automatically is to construct classifiers on a person-by-person basis.
    Type
    a
  6. Kelly, D.; Wacholder, N.; Rittman, R.; Sun, Y.; Kantor, P.; Small, S.; Strzalkowski, T.: Using interview data to identify evaluation criteria for interactive, analytical question-answering systems (2007) 0.00
    0.0011749709 = product of:
      0.0023499418 = sum of:
        0.0023499418 = product of:
          0.0046998835 = sum of:
            0.0046998835 = weight(_text_:a in 332) [ClassicSimilarity], result of:
              0.0046998835 = score(doc=332,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.10809815 = fieldWeight in 332, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=332)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this work is to identify potential evaluation criteria for interactive, analytical question-answering (QA) systems by analyzing evaluative comments made by users of such a system. Qualitative data collected from intelligence analysts during interviews and focus groups were analyzed to identify common themes related to performance, use, and usability. These data were collected as part of an intensive, three-day evaluation workshop of the High-Quality Interactive Question Answering (HITIQA) system. Inductive coding and memoing were used to identify and categorize these data. Results suggest potential evaluation criteria for interactive, analytical QA systems, which can be used to guide the development and design of future systems and evaluations. This work contributes to studies of QA systems, information seeking and use behaviors, and interactive searching.
    Type
    a