Search (4 results, page 1 of 1)

  • author_ss:"Li, Y."
  • year_i:[2010 TO 2020}
  1. Crespo, J.A.; Herranz, N.; Li, Y.; Ruiz-Castillo, J.: The effect on citation inequality of differences in citation practices at the Web of Science subject category level (2014) 0.05
    0.0515022 = product of:
      0.1030044 = sum of:
        0.1030044 = sum of:
          0.053092297 = weight(_text_:web in 1291) [ClassicSimilarity], result of:
            0.053092297 = score(doc=1291,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.3122631 = fieldWeight in 1291, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
          0.049912106 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
            0.049912106 = score(doc=1291,freq=4.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.27358043 = fieldWeight in 1291, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
      0.5 = coord(1/2)
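The nested breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch of how the numbers combine, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and taking queryNorm and fieldNorm as given from the tree:

```python
import math

def classic_similarity_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term weight under Lucene's ClassicSimilarity:
    weight = queryWeight * fieldWeight
           = (idf * queryNorm) * (tf * idf * fieldNorm)
    """
    tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Figures taken from the explanation tree above (doc 1291).
w_web = classic_similarity_weight(6.0, 4597, 44218, 0.052098576, 0.0390625)
w_22 = classic_similarity_weight(4.0, 3622, 44218, 0.052098576, 0.0390625)

# One of two top-level clauses matched, hence coord(1/2) = 0.5.
score = (w_web + w_22) * 0.5   # reproduces the displayed 0.0515022
```

Summing the two term weights and applying coord(1/2) reproduces the document's displayed score of 0.0515022.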
    
    Abstract
    This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category, level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields, the effect on citation inequality of differences in citation practices increases from ~14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to convert the citation counts of articles into the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
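The normalization described in the abstract divides each article's raw citation count by its subfield's exchange rate, which in the simplest case is the subfield mean citation count. A toy sketch of that mean-based normalization (the subfields and counts below are invented for illustration, not the paper's data):

```python
from statistics import mean

# Hypothetical raw citation counts grouped by subfield (illustrative only).
citations = {
    "Biochemistry": [40, 10, 25],
    "Mathematics": [4, 1, 10],
}

# Exchange rate per subfield, here taken as the subfield mean citation count.
er = {field: mean(counts) for field, counts in citations.items()}

# Normalized counts: raw citations divided by the subfield's exchange rate,
# making articles comparable across subfields with different citation practices.
normalized = {
    field: [c / er[field] for c in counts]
    for field, counts in citations.items()
}
```

After this rescaling every subfield's mean normalized count is 1, so residual citation inequality no longer reflects subfield-level citation practices.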
    Object
    Web of Science
  2. Arora, S.K.; Li, Y.; Youtie, J.; Shapira, P.: Using the Wayback Machine to mine websites in the social sciences : a methodological resource (2016) 0.01
    0.013273074 = product of:
      0.026546149 = sum of:
        0.026546149 = product of:
          0.053092297 = sum of:
            0.053092297 = weight(_text_:web in 3050) [ClassicSimilarity], result of:
              0.053092297 = score(doc=3050,freq=6.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.3122631 = fieldWeight in 3050, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3050)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Websites offer an unobtrusive data source for developing and analyzing information about various types of social science phenomena. In this paper, we provide a methodological resource for social scientists looking to expand their toolkit using unstructured web-based text, and in particular, with the Wayback Machine, to access historical website data. After providing a literature review of existing research that uses the Wayback Machine, we put forward a step-by-step description of how the analyst can design a research project using archived websites. We draw on the example of a project that analyzes indicators of innovation activities and strategies in 300 U.S. small- and medium-sized enterprises in green goods industries. We present six steps to access historical Wayback website data: (a) sampling, (b) organizing and defining the boundaries of the web crawl, (c) crawling, (d) website variable operationalization, (e) integration with other data sources, and (f) analysis. Although our examples draw on specific types of firms in green goods industries, the method can be generalized to other areas of research. In discussing the limitations and benefits of using the Wayback Machine, we note that both machine and human effort are essential to developing a high-quality data set from archived web information.
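The sampling and crawl-boundary steps can be sketched against the Wayback Machine's public CDX API. The `web.archive.org/cdx/search/cdx` endpoint and its `url`, `from`, `to`, `output`, and `filter` parameters are the CDX server's own; the year range and status-code filter below are illustrative assumptions, not the authors' settings:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query_url(site, from_year, to_year):
    """Build a CDX query listing archived captures of `site` between two
    years (steps a-b: sampling and defining the crawl boundaries)."""
    params = {
        "url": site,
        "from": str(from_year),      # timestamps are yyyy[MMddhhmmss]
        "to": str(to_year),
        "output": "json",
        "filter": "statuscode:200",  # keep only successful captures
    }
    return CDX_ENDPOINT + "?" + urlencode(params)

def list_captures(site, from_year, to_year):
    """Fetch the capture list; the first row is a header, the rest are captures."""
    with urlopen(cdx_query_url(site, from_year, to_year)) as resp:
        rows = json.load(resp)
    return rows[1:] if rows else []

if __name__ == "__main__":
    # Network call deferred to direct execution; here we just show the URL.
    print(cdx_query_url("example.com", 2008, 2012))
```

Each returned capture carries a timestamp that can be spliced into a `web.archive.org/web/<timestamp>/<url>` address for the actual crawl (step c).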
  3. Yang, M.; Kiang, M.; Chen, H.; Li, Y.: Artificial immune system for illicit content identification in social media (2012) 0.01
    0.01083742 = product of:
      0.02167484 = sum of:
        0.02167484 = product of:
          0.04334968 = sum of:
            0.04334968 = weight(_text_:web in 4980) [ClassicSimilarity], result of:
              0.04334968 = score(doc=4980,freq=4.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.25496176 = fieldWeight in 4980, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4980)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Social media is frequently used as a platform for the exchange of information and opinions as well as for propaganda dissemination. But online content can be misused for the distribution of illicit information, such as violent postings in web forums. Illicit content is highly distributed in social media, while non-illicit content is unspecific and topically diverse. It is costly and time-consuming to label a large amount of illicit content (positive examples) and non-illicit content (negative examples) to train classification systems. Nevertheless, it is relatively easy to obtain large volumes of unlabeled content in social media. In this article, an artificial immune system-based technique is presented to address the difficulties of illicit content identification in social media. Inspired by the positive selection principle in the immune system, we designed a novel labeling heuristic based on partially supervised learning to extract high-quality positive and negative examples from unlabeled datasets. The empirical evaluation results from two large hate group web forums suggest that our proposed approach generally outperforms the benchmark techniques and exhibits more stable performance.
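As an illustration only — this is not the paper's artificial immune system algorithm — a similarity-threshold heuristic in the same spirit, which pulls likely positives and likely negatives out of an unlabeled pool using a handful of labeled positives (all data, thresholds, and function names here are invented):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term frequencies for a short text."""
    return Counter(text.lower().split())

def cosine(c1, c2):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def extract_examples(positives, unlabeled, hi=0.5, lo=0.1):
    """Label unlabeled texts by their best similarity to any known positive:
    high similarity -> likely positive, low similarity -> likely negative,
    anything in between stays unlabeled."""
    pos_vecs = [bow(t) for t in positives]
    likely_pos, likely_neg = [], []
    for text in unlabeled:
        sim = max(cosine(bow(text), p) for p in pos_vecs)
        if sim >= hi:
            likely_pos.append(text)
        elif sim <= lo:
            likely_neg.append(text)
    return likely_pos, likely_neg
```

The extracted high-confidence examples can then seed a conventional classifier, which is the general shape of the partially supervised pipeline the abstract describes.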
  4. Shen, J.; Yao, L.; Li, Y.; Clarke, M.; Wang, L.; Li, D.: Visualizing the history of evidence-based medicine : a bibliometric analysis (2013) 0.01
    0.007663213 = product of:
      0.015326426 = sum of:
        0.015326426 = product of:
          0.030652853 = sum of:
            0.030652853 = weight(_text_:web in 1090) [ClassicSimilarity], result of:
              0.030652853 = score(doc=1090,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.18028519 = fieldWeight in 1090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1090)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The aim of this paper is to visualize the history of evidence-based medicine (EBM) and to examine the characteristics of EBM development in China and the West. We searched the Web of Science and the Chinese National Knowledge Infrastructure database for papers related to EBM. We applied information visualization techniques, citation analysis, cocitation analysis, cocitation cluster analysis, and network analysis to construct historiographies, theme networks, and chronological theme maps regarding EBM in China and the West. EBM appeared to develop in 4 stages: incubation (1972-1992 in the West vs. 1982-1999 in China), initiation (1992-1993 vs. 1999-2000), rapid development (1993-2000 vs. 2000-2004), and stable distribution (2000 onwards vs. 2004 onwards). Although there was a lag in EBM initiation in China compared with the West, the pace of development appeared similar. Our study shows that important differences exist in research themes, domain structures, and development depth, and in the speed of adoption between China and the West. In the West, efforts in EBM have shifted from education to practice, and from the quality of evidence to its translation. In China, there was a similar shift from education to practice, and from production of evidence to its translation. In addition, this concept has diffused to other healthcare areas, leading to the development of evidence-based traditional Chinese medicine, evidence-based nursing, and evidence-based policy making.