Search (7 results, page 1 of 1)

  • author_ss:"Ruiz-Castillo, J."
  1. Crespo, J.A.; Herranz, N.; Li, Y.; Ruiz-Castillo, J.: The effect on citation inequality of differences in citation practices at the Web of Science subject category level (2014) 0.14
    0.13763928 = product of:
      0.20645893 = sum of:
        0.18247387 = weight(_text_:citation in 1291) [ClassicSimilarity], result of:
          0.18247387 = score(doc=1291,freq=18.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.7771468 = fieldWeight in 1291, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.023985062 = product of:
          0.047970124 = sum of:
            0.047970124 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.047970124 = score(doc=1291,freq=4.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.27358043 = fieldWeight in 1291, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
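    The score breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm, and coord() scales for the fraction of query clauses that matched. A minimal Python sketch that recomputes the top result's reported score (0.13763928) from the factors printed in the tree; only the numbers shown above are used, and the helper name is ours:

      from math import sqrt

      def classic_similarity_term(freq, idf, query_norm, field_norm):
          """One term's contribution: (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
          query_weight = idf * query_norm                 # 0.23479973 for _text_:citation
          field_weight = sqrt(freq) * idf * field_norm    # 0.7771468 in doc 1291
          return query_weight * field_weight              # 0.18247387

      citation = classic_similarity_term(18.0, 4.6892867, 0.050071523, 0.0390625)
      term_22 = classic_similarity_term(4.0, 3.5018296, 0.050071523, 0.0390625) * 0.5  # coord(1/2)
      score = (citation + term_22) * (2.0 / 3.0)  # coord(2/3): 2 of 3 query clauses matched
      print(round(score, 8))                      # ~0.13763928, the value reported above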
    
    Abstract
    This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category, level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields, the effect on citation inequality of differences in citation practices increases from approximately 14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to express the citation counts of articles as the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
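    The exchange-rate normalization sketched in this abstract can be illustrated as follows: an ER for a subfield is, roughly, how many citations in that subfield correspond to one all-sciences citation at comparable quantiles, and raw counts are then divided by it. A hedged Python sketch with made-up data; the function, the simple quantile-ratio averaging, and the reliability check are illustrative simplifications of the Crespo et al. model, not the authors' exact procedure:

      import numpy as np

      def exchange_rate(subfield_citations, all_sciences_citations, quantiles):
          """Illustrative ER: mean ratio of subfield to all-sciences citation quantiles
          over a quantile interval; also report whether the coefficient of variation
          of the ratios is <= 0.10 (the reliability criterion quoted in the abstract)."""
          sub_q = np.quantile(subfield_citations, quantiles)
          all_q = np.quantile(all_sciences_citations, quantiles)
          ratios = sub_q / all_q
          er = ratios.mean()
          reliable = ratios.std() / er <= 0.10
          return er, reliable

      rng = np.random.default_rng(0)
      subfield = rng.negative_binomial(1, 0.05, 5_000)    # hypothetical highly cited subfield
      all_sci = rng.negative_binomial(1, 0.15, 200_000)   # hypothetical all-sciences reference
      # interpreting the [660, 978] interval as quantiles in thousandths (an assumption)
      er, ok = exchange_rate(subfield, all_sci, np.linspace(0.660, 0.978, 50))
      normalized = subfield / er   # express subfield counts in all-sciences equivalents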
  2. Albarrán, P.; Ruiz-Castillo, J.: References made and citations received by scientific articles (2011) 0.10
    0.09784907 = product of:
      0.1467736 = sum of:
        0.12642162 = weight(_text_:citation in 4185) [ClassicSimilarity], result of:
          0.12642162 = score(doc=4185,freq=6.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5384232 = fieldWeight in 4185, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=4185)
        0.020351999 = product of:
          0.040703997 = sum of:
            0.040703997 = weight(_text_:22 in 4185) [ClassicSimilarity], result of:
              0.040703997 = score(doc=4185,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.23214069 = fieldWeight in 4185, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4185)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article studies massive evidence about references made and citations received after a 5-year citation window by 3.7 million articles published from 1998 to 2002 in 22 scientific fields. We find that the distributions of references made and citations received share a number of basic features across sciences. Reference distributions are rather skewed to the right, while citation distributions are even more highly skewed: the mean is about 20 percentage points to the right of the median, and articles with a remarkable or an outstanding number of citations represent about 9% of the total. Moreover, the existence of a power law representing the upper tail of citation distributions cannot be rejected in 17 fields, whose articles represent 74.7% of the total. Contrary to the evidence in other contexts, the value of the scale parameter is above 3.5 in 13 of the 17 cases. Finally, the power-law tails are typically small, but they capture a considerable proportion of the total citations received.
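    As a generic illustration of the tail analysis described above (not the paper's estimation or goodness-of-fit procedure), the scale parameter of a power-law tail can be estimated with the continuous maximum-likelihood (Hill) estimator for the articles above a chosen citation threshold:

      import numpy as np

      def hill_alpha(citations, x_min):
          """MLE of the power-law exponent for the tail x >= x_min, treating counts
          as continuous: alpha = 1 + n / sum(ln(x / x_min))."""
          tail = np.asarray(citations, dtype=float)
          tail = tail[tail >= x_min]
          alpha = 1.0 + tail.size / np.log(tail / x_min).sum()
          return alpha, tail.size

      rng = np.random.default_rng(1)
      cites = np.round((rng.pareto(2.8, 100_000) + 1) * 3).astype(int)  # made-up counts
      alpha, n_tail = hill_alpha(cites, x_min=50)
      tail_share_articles = n_tail / cites.size                      # the tail is small ...
      tail_share_citations = cites[cites >= 50].sum() / cites.sum()  # ... but citation-rich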
  3. Perianes-Rodriguez, A.; Ruiz-Castillo, J.: University citation distributions (2016) 0.07
    0.06724416 = product of:
      0.20173246 = sum of:
        0.20173246 = weight(_text_:citation in 3152) [ClassicSimilarity], result of:
          0.20173246 = score(doc=3152,freq=22.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.8591682 = fieldWeight in 3152, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3152)
      0.33333334 = coord(1/3)
    
    Abstract
    We investigate the citation distributions of the 500 universities in the 2013 edition of the Leiden Ranking produced by the Centre for Science and Technology Studies (CWTS). We use a Web of Science data set consisting of 3.6 million articles published from 2003 to 2008 and classified into 5,119 clusters. The main findings are the following. First, the universality claim, according to which all university citation distributions, appropriately normalized, follow a single functional form, is not supported by the data. Second, the 500 university citation distributions are all highly skewed and very similar. Broadly speaking, university citation distributions appear to behave as if they differ by a relatively constant scale factor over a large, intermediate part of their support. Third, citation-impact differences between universities account for 3.85% of overall citation inequality. This percentage is greatly reduced when university citation distributions are normalized using their mean normalized citation scores (MNCSs) as normalization factors. Finally, regarding practical consequences, we only need a single explanatory model for the type of high skewness characterizing all university citation distributions, and the similarity of university citation distributions goes a long way in explaining the similarity of the university rankings obtained with the MNCS and the Top 10% indicator.
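    The normalization step mentioned in the third finding can be sketched generically: each article's citation count is divided by the mean of its field (or cluster), and a university's distribution is then rescaled by its mean normalized citation score (MNCS). A hypothetical Python sketch with invented data, not the paper's exact computation:

      import numpy as np

      def mncs_normalize(articles, field_means):
          """articles: list of (citations, field) pairs for one university.
          Step 1: field-normalize each count; step 2: rescale by the university's MNCS."""
          field_normalized = np.array([c / field_means[f] for c, f in articles])
          mncs = field_normalized.mean()
          return field_normalized / mncs, mncs

      articles = [(12, "bio"), (3, "math"), (45, "bio"), (0, "math"), (7, "chem")]
      field_means = {"bio": 15.2, "math": 2.9, "chem": 8.1}   # hypothetical field means
      scaled_distribution, mncs = mncs_normalize(articles, field_means)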
  4. Albarrán, P.; Perianes-Rodríguez, A.; Ruiz-Castillo, J.: Differences in citation impact across countries (2015) 0.06
    0.059595715 = product of:
      0.17878714 = sum of:
        0.17878714 = weight(_text_:citation in 1665) [ClassicSimilarity], result of:
          0.17878714 = score(doc=1665,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.7614453 = fieldWeight in 1665, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=1665)
      0.33333334 = coord(1/3)
    
    Abstract
    Using a large data set, indexed by Thomson Reuters, consisting of 4.4 million articles published in 1998-2003 with a 5-year citation window for each year, this article studies country citation distributions for a partitioning of the world into 36 countries and two geographical areas in eight broad scientific fields and the all-sciences case. The two key findings are the following. First, country citation distributions are highly skewed and very similar to each other in all fields. Second, to a large extent, differences in country citation distributions can be accounted for by scale factors. The empirical situation described in the article helps to understand why international comparisons of citation impact according to (a) mean citations and (b) the percentage of articles in each country belonging to the top 10% of the most cited articles are so similar to each other.
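    The two indicators compared in the final sentence can be written down generically (textbook definitions, not the paper's exact implementation): a country's mean citations per article, and the share of its articles at or above the world's 90th citation percentile.

      import numpy as np

      def country_indicators(country_citations, world_citations):
          """Return (mean citation rate, Top 10% share) for one country,
          using the world's 90th percentile as the excellence threshold."""
          c = np.asarray(country_citations, dtype=float)
          threshold = np.percentile(world_citations, 90)
          return c.mean(), float((c >= threshold).mean())

      world = np.random.default_rng(2).poisson(6, 500_000)   # made-up world distribution
      mcr, top10_share = country_indicators([3, 0, 14, 28, 5, 9], world)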
  5. Costas, R.; Perianes-Rodríguez, A.; Ruiz-Castillo, J.: On the quest for currencies of science : field "exchange rates" for citations and Mendeley readership (2017) 0.05
    0.054922134 = product of:
      0.0823832 = sum of:
        0.0688152 = weight(_text_:citation in 4051) [ClassicSimilarity], result of:
          0.0688152 = score(doc=4051,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.29308042 = fieldWeight in 4051, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.013568 = product of:
          0.027136 = sum of:
            0.027136 = weight(_text_:22 in 4051) [ClassicSimilarity], result of:
              0.027136 = score(doc=4051,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.15476047 = fieldWeight in 4051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4051)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose
    The introduction of "altmetrics" as new tools to analyze scientific impact within the reward system of science has challenged the hegemony of citations as the predominant source for measuring scientific impact. Mendeley readership has been identified as one of the most important altmetric sources, with several features that are similar to citations. The purpose of this paper is to perform an in-depth analysis of the differences and similarities between the distributions of Mendeley readership and citations across fields.
    Design/methodology/approach
    The authors analyze two issues by using in each case a common analytical framework for both metrics: the shape of the distributions of readership and citations, and the field normalization problem generated by differences in citation and readership practices across fields. In the first issue the authors use the characteristic scores and scales method, and in the second the measurement framework introduced in Crespo et al. (2013).
    Findings
    There are three main results. First, the citations and Mendeley readership distributions exhibit a strikingly similar degree of skewness in all fields. Second, the results on "exchange rates" (ERs) for Mendeley readership empirically support the possibility of comparing readership counts across fields, as well as the field normalization of readership distributions using ERs as normalization factors. Third, field normalization using field mean readerships as normalization factors leads to comparably good results.
    Originality/value
    These findings open up challenging new questions, particularly regarding the possibility of obtaining conflicting results from field-normalized citation and Mendeley readership indicators; this suggests the need for better determining the role of the two metrics in capturing scientific recognition.
    Date
    20. 1.2015 18:30:22
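    The characteristic scores and scales (CSS) method named in the abstract above partitions a distribution by repeatedly taking means: the first characteristic score is the mean of all counts, the next is the mean of the counts above it, and so on; the same procedure applies to citation counts and to Mendeley readership. A minimal sketch under that textbook definition (not necessarily the exact variant used in the paper):

      import numpy as np

      def characteristic_scores(counts, levels=3):
          """Return the CSS thresholds s1, s2, ...: s1 = mean of all counts,
          s_{k+1} = mean of the counts strictly above s_k."""
          x = np.asarray(counts, dtype=float)
          scores = []
          for _ in range(levels):
              if x.size == 0:
                  break
              s = x.mean()
              scores.append(s)
              x = x[x > s]
          return scores

      citations = [0, 1, 1, 2, 3, 5, 8, 21, 55, 144]     # made-up citation counts
      readership = [2, 4, 4, 6, 9, 12, 15, 40, 80, 200]  # made-up Mendeley readers
      print(characteristic_scores(citations), characteristic_scores(readership))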
  6. Herranz, N.; Ruiz-Castillo, J.: Multiplicative and fractional strategies when journals are assigned to several subfields (2012) 0.05
    0.048659697 = product of:
      0.14597909 = sum of:
        0.14597909 = weight(_text_:citation in 484) [ClassicSimilarity], result of:
          0.14597909 = score(doc=484,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.62171745 = fieldWeight in 484, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=484)
      0.33333334 = coord(1/3)
    
    Abstract
    In many data sets, articles are classified into subfields through the journals in which they have been published. The problem is that while many journals are assigned to a single subfield, many others are assigned to several. This article discusses a multiplicative and a fractional strategy to deal with this situation. The empirical part studies different aspects of citation distributions under the two strategies, namely: the number of articles, the mean citation rate, the broad shape of the distribution, their characterization in terms of size- and scale-invariant indicators of high and low impact, and the presence of extreme distributions, that is, distributions that behave very differently from the rest. We found that, despite large differences in the number of articles under the two strategies, the similarity of the citation characteristics of articles published in journals assigned to one or several subfields means that the choice between the two strategies should not lead to a radically different picture in practical applications. Nevertheless, the characterization of citation excellence through a high-impact indicator may differ considerably depending on that choice.
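    The two strategies contrasted here can be illustrated with a toy example: under the multiplicative strategy an article in a journal assigned to k subfields is counted once in each of the k subfield distributions, while under the fractional strategy it enters each with weight 1/k. A hypothetical Python sketch (journal names and counts are invented):

      from collections import defaultdict

      def split_by_subfield(articles, journal_subfields, fractional=True):
          """articles: list of (journal, citations). Returns, per subfield, a list of
          (citations, weight): weight = 1/k under the fractional strategy, 1 otherwise."""
          per_subfield = defaultdict(list)
          for journal, cites in articles:
              subfields = journal_subfields[journal]
              weight = 1.0 / len(subfields) if fractional else 1.0
              for sf in subfields:
                  per_subfield[sf].append((cites, weight))
          return per_subfield

      journal_subfields = {"J1": ["ecology"], "J2": ["ecology", "statistics"]}
      articles = [("J1", 10), ("J2", 4), ("J2", 30)]
      fractional = split_by_subfield(articles, journal_subfields, fractional=True)
      multiplicative = split_by_subfield(articles, journal_subfields, fractional=False)
      mean_citation_rate = {sf: sum(c * w for c, w in rows) / sum(w for _, w in rows)
                            for sf, rows in fractional.items()}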
  7. Perianes-Rodriguez, A.; Ruiz-Castillo, J.: The impact of classification systems in the evaluation of the research performance of the Leiden Ranking universities (2018) 0.02
    0.020274874 = product of:
      0.06082462 = sum of:
        0.06082462 = weight(_text_:citation in 4374) [ClassicSimilarity], result of:
          0.06082462 = score(doc=4374,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.25904894 = fieldWeight in 4374, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4374)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we investigate the consequences of choosing different classification systems, namely the way publications (or journals) are assigned to scientific fields, for the ranking of research units. We study the impact of this choice on the ranking of 500 universities in the 2013 edition of the Leiden Ranking in two cases. First, we compare a Web of Science (WoS) journal-level classification system, consisting of 236 subject categories, and a publication-level algorithmically constructed system, denoted G8, consisting of 5,119 clusters. The result is that the consequences of the move from the WoS to the G8 system using the Top 1% citation impact indicator are much greater than the consequences of this move using the Top 10% indicator. Second, we compare the G8 classification system and a publication-level alternative of the same family, the G6 system, consisting of 1,363 clusters. The result is that, although less important than in the previous case, the consequences of the move from the G6 to the G8 system under the Top 1% indicator are still of a large order of magnitude.
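    The Top 1% and Top 10% indicators discussed above can be sketched generically: under a given classification system, an article counts as excellent if it is among the top x% most cited articles of its cluster, and a university's indicator is the share of its output that is excellent. A hypothetical sketch, not the Leiden Ranking's actual computation; switching classification systems simply means relabelling the cluster of every publication and rerunning it:

      import numpy as np

      def top_x_indicator(articles, x=10.0):
          """articles: list of (university, cluster, citations) records.
          Returns each university's share of articles in the top x% of their cluster."""
          clusters = {cl for _, cl, _ in articles}
          thresholds = {cl: np.percentile([c for _, cl2, c in articles if cl2 == cl], 100 - x)
                        for cl in clusters}
          shares = {}
          for uni in {u for u, _, _ in articles}:
              own = [(cl, c) for u, cl, c in articles if u == uni]
              excellent = sum(c >= thresholds[cl] for cl, c in own)
              shares[uni] = excellent / len(own)
          return shares

      records = [("U1", "G8-17", 40), ("U1", "G8-17", 2), ("U1", "G8-903", 11),
                 ("U2", "G8-17", 9), ("U2", "G8-903", 60), ("U2", "G8-903", 1)]
      top10 = top_x_indicator(records, x=10.0)   # Top 10%; use x=1.0 for the Top 1% indicator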