Search (4 results, page 1 of 1)

  • author_ss:"Daniel, H.-D."
  • author_ss:"Mutz, R."
  • year_i:[2010 TO 2020}
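  A note on the year filter's brackets: in Lucene/Solr range syntax a square bracket marks an inclusive bound and a curly brace an exclusive one, and the two may be mixed per end. The filter above therefore matches 2010 through 2019:

      year_i:[2010 TO 2020}   matches 2010 <= year < 2020
      year_i:[2010 TO 2019]   equivalent, with both bounds inclusive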
  1. Mutz, R.; Daniel, H.-D.: What is behind the curtain of the Leiden Ranking? (2015) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 2171) [ClassicSimilarity], result of:
              0.009076704 = score(doc=2171,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 2171, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2171)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
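    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output; the 0.00 printed next to each title is this same score rounded to two decimals, and the trees for results 2-4 follow the identical pattern. A minimal sketch, assuming the standard ClassicSimilarity formulas, that reproduces the arithmetic from the constants shown:

        import math

        # Constants copied from the explain tree for term "a" in doc 2171.
        freq       = 10.0         # termFreq
        idf        = 1.153047     # ~ 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 37943)
        query_norm = 0.046056706  # queryNorm
        field_norm = 0.046875     # fieldNorm(doc=2171), encodes field length

        tf           = math.sqrt(freq)              # 3.1622777 = tf(freq=10.0)
        query_weight = idf * query_norm             # 0.053105544 = queryWeight
        field_weight = tf * idf * field_norm        # 0.1709182  = fieldWeight
        term_score   = query_weight * field_weight  # 0.009076704
        score        = term_score * 0.5 * 0.5       # two coord(1/2) factors

        print(f"{score:.9f}")  # ~0.002269176, the value at the top of the tree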
    
    Abstract
    Even with very well-documented university rankings, it is difficult for an individual university to reconstruct its position in the ranking. What determines whether a university places higher or lower? Taking ETH Zurich as an example, the aim of this communication is to reconstruct how the high position of ETHZ (rank no. 1 in Europe in PP(top 10%)) in the field "social sciences, arts and humanities" of the Centre for Science and Technology Studies (CWTS) Leiden Ranking 2013 came about. According to our analyses, the bibliometric indicator values of a university depend very strongly on weights that result in differing estimates of both the total number of a university's publications and the number of publications with a citation impact in the 90th percentile, or PP(top 10%). In addition, we examine the effect of weights at the level of individual publications. Based on the results, we offer recommendations for improving the Leiden Ranking (for example, publishing sample calculations to increase transparency).
    Type
    a
  2. Mutz, R.; Wolbring, T.; Daniel, H.-D.: The effect of the "very important paper" (VIP) designation in Angewandte Chemie International Edition on citation impact : a propensity score matching analysis (2017) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3792) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3792,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3792, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3792)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Scientific journals publish an increasing number of articles every year. To steer readers' attention to the most important papers, journals use several techniques (e.g., the lead paper). Angewandte Chemie International Edition (AC), a leading international journal in chemistry, signals high-quality papers by designating them as a "very important paper" (VIP). This study investigates the citation impact of Communications in AC that received the special feature VIP, both cumulated and over time. Using propensity score matching, the treatment group (VIP) and the control group (non-VIP) were balanced on 14 covariates to estimate the unconfounded "average treatment effect on the treated" for the VIP designation. Out of N = 3,011 Communications published in 2007 and 2008, N = 207 received the special feature VIP. For each Communication, data were collected from AC (e.g., referees' ratings) and from the databases Chemical Abstracts (e.g., sections) and the Web of Science (e.g., citations). The estimated unconfounded average treatment effect on the treated (that is, on Communications designated as a VIP) was statistically significant and amounted to 19.83 citations. In addition, the special feature VIP fostered cumulated annual citation growth: for instance, the time until a Communication reached its maximum annual number of citations was reduced.
    Type
    a
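    The matching design the abstract describes can be illustrated compactly. Below is a minimal sketch of propensity score matching with nearest-neighbor matching on the estimated score; the data, the three covariates, and the simulated effect are synthetic stand-ins for illustration only, not the study's AC data or its 14 covariates:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic stand-ins: x = covariates, t = treatment flag (VIP),
        # y = outcome (citations); the true treatment effect is 4.
        n = 3000
        x = rng.normal(size=(n, 3))
        t = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.8, 0.5, -0.3]))))
        y = 5 + 2 * x.sum(axis=1) + 4 * t + rng.normal(size=n)

        # 1. Estimate propensity scores P(t = 1 | x) by logistic regression.
        ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

        # 2. Match each treated unit to the control unit with the closest score.
        treated  = np.flatnonzero(t == 1)
        controls = np.flatnonzero(t == 0)
        matches  = controls[np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)]

        # 3. Average treatment effect on the treated (ATT): mean outcome
        #    difference between treated units and their matched controls.
        att = (y[treated] - y[matches]).mean()
        print(f"estimated ATT ~ {att:.2f}")  # close to the simulated effect of 4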
  3. Mutz, R.; Bornmann, L.; Daniel, H.-D.: Testing for the fairness and predictive validity of research funding decisions : a multilevel multiple imputation for missing data approach using ex-ante and ex-post peer evaluation data from the Austrian Science Fund (2015) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 2270) [ClassicSimilarity], result of:
              0.006765375 = score(doc=2270,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 2270, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante peer evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post peer evaluation (EXPOST) was also available; for the remaining grant applications, a multilevel missing-data imputation approach was used to account for verification bias, considered here for the first time in peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias; with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combination of theoretical concepts of validity and fairness with a missing-data imputation approach to correct for verification bias and, second, multilevel modeling to test peer-review-based funding decisions for both validity and fairness in terms of potential and real biases.
    Type
    a
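    Verification bias, the methodological crux above, is easy to demonstrate in miniature: when the ex-post outcome is observed only for funded (i.e., highly rated) applications, the observed ex-ante/ex-post correlation is attenuated by range restriction. A small simulation sketch with synthetic scores, not the FWF data:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-ins: ex-ante rating and ex-post outcome correlate at 0.5.
        n, rho = 8000, 0.5
        exante = rng.normal(size=n)
        expost = rho * exante + np.sqrt(1 - rho**2) * rng.normal(size=n)

        # Verification: EXPOST exists only for funded (top ~20% ex-ante) projects.
        funded = exante > np.quantile(exante, 0.8)

        r_full     = np.corrcoef(exante, expost)[0, 1]                  # ~0.50
        r_verified = np.corrcoef(exante[funded], expost[funded])[0, 1]  # much lower

        print(f"full sample r = {r_full:.2f}, funded-only r = {r_verified:.2f}")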
  4. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Multilevel-statistical reformulation of citation-based university rankings : the Leiden ranking 2011/2012 (2013) 0.00
    0.0011959607 = product of:
      0.0023919214 = sum of:
        0.0023919214 = product of:
          0.0047838427 = sum of:
            0.0047838427 = weight(_text_:a in 1007) [ClassicSimilarity], result of:
              0.0047838427 = score(doc=1007,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.090081796 = fieldWeight in 1007, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1007)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since the 1990s, with heightened competition and strong growth in the international higher education market, an increasing number of rankings have been created that measure the scientific performance of institutions on the basis of data. The Leiden Ranking 2011/2012 (LR) was published early in 2012. Starting from Goldstein and Spiegelhalter's (1996) recommendations for conducting quantitative comparisons among institutions, in this study we undertook a reformulation of the LR by means of multilevel regression models. First, with our models we replicated the ranking results; second, the reanalysis of the LR data showed that only 5% of the total variation in PP(top 10%) is attributable to differences between universities. Beyond that, about 80% of the variation between universities can be explained by differences among countries. If covariates are included in the model, the differences among most of the universities become meaningless. Our findings have implications for conducting university rankings in general and for the LR in particular. For example, with Goldstein-adjusted confidence intervals it is possible to interpret the significance of differences among universities meaningfully: rank differences among universities should be interpreted as meaningful only if their confidence intervals do not overlap.
    Type
    a
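    The 5% figure above is an intraclass correlation (ICC): the share of total variation that sits at the university level in a multilevel model. A minimal sketch of the idea on synthetic data, using a simple method-of-moments variance decomposition rather than the authors' full multilevel regression, and not the Leiden Ranking data:

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic stand-in: 500 universities with 200 publications each, and a
        # small between-university variance (0.05) next to a large
        # within-university variance (0.95), i.e. a true ICC of 5%.
        n_uni, n_pub = 500, 200
        uni_effect = rng.normal(0, np.sqrt(0.05), size=n_uni)
        scores = uni_effect[:, None] + rng.normal(0, np.sqrt(0.95), size=(n_uni, n_pub))

        var_between = scores.mean(axis=1).var(ddof=1)    # variance of university means
        var_within  = scores.var(axis=1, ddof=1).mean()  # mean within-university variance

        # The variance of the means overstates the between-university variance by
        # the sampling noise of each mean; subtract it before forming the ICC.
        var_between_adj = max(var_between - var_within / n_pub, 0.0)
        icc = var_between_adj / (var_between_adj + var_within)
        print(f"ICC ~ {icc:.3f}")  # ~0.05: only ~5% of variation is between universities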