Search (6 results, page 1 of 1)

  • author_ss:"Mutz, R."
  • theme_ss:"Informetrie"
  1. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.03
    0.03037249 = product of:
      0.06074498 = sum of:
        0.06074498 = sum of:
          0.0108246 = weight(_text_:a in 1431) [ClassicSimilarity], result of:
            0.0108246 = score(doc=1431,freq=8.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.20383182 = fieldWeight in 1431, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0625 = fieldNorm(doc=1431)
          0.04992038 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
            0.04992038 = score(doc=1431,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.30952093 = fieldWeight in 1431, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1431)
      0.5 = coord(1/2)
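
    Note: each hit's relevance score is plain TF-IDF arithmetic from Lucene's ClassicSimilarity. A minimal Python sketch that reproduces the numbers in the explain tree above (the formulas tf = sqrt(freq), queryWeight = idf * queryNorm, and fieldWeight = tf * idf * fieldNorm are ClassicSimilarity's; the constants are copied from the tree):

      import math

      # Reproduce the ClassicSimilarity score of hit 1 from its explain tree.
      def term_score(freq, idf, field_norm, query_norm):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      QUERY_NORM = 0.046056706
      w_a  = term_score(freq=8.0, idf=1.153047,  field_norm=0.0625, query_norm=QUERY_NORM)
      w_22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.0625, query_norm=QUERY_NORM)

      score = 0.5 * (w_a + w_22)  # coord(1/2) halves the sum of the clause scores
      print(w_a, w_22, score)     # ~0.0108246  ~0.0499204  ~0.0303725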
    
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement, which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22.08.2014 17:05:18
    Type
    a
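
    A hedged illustration of what a percentile-based citation rank like the one in the abstract above does. This is a generic percentile computation, not the exact P100/P100' formula defined in the paper:

      # Generic citation percentile within a reference set; the P100/P100'
      # definitions in the paper differ in how ranks and ties are handled.
      def citation_percentile(citations, reference_set):
          below = sum(1 for c in reference_set if c < citations)
          return 100.0 * below / len(reference_set)

      reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]  # made-up citation counts
      print(citation_percentile(5, reference))        # 50.0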
  2. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Do we need the h index and its variants in addition to standard bibliometric measures? (2009) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 2861) [ClassicSimilarity], result of:
              0.009567685 = score(doc=2861,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 2861, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2861)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this study, we investigate whether there is a need for the h index and its variants in addition to standard bibliometric measures (SBMs). Results from our recent study (L. Bornmann, R. Mutz, & H.-D. Daniel, 2008) have indicated that there are two types of indices: one type (e.g., the h index) describes the most productive core of a scientist's output and informs about the number of papers in the core, while the other type (e.g., the a index) depicts the impact of the papers in the core. In evaluative bibliometric studies, the two dimensions quantity and impact of output are usually assessed using the SBMs number of publications (for the quantity dimension) and total citation counts (for the impact dimension). We additionally included the SBMs in the factor analysis. The results of the newly calculated analysis indicate that there is a high intercorrelation between number of publications and the indices that load substantially on the factor Quantity of the Productive Core, as well as between total citation counts and the indices that load substantially on the factor Impact of the Productive Core. The high-loading indices and SBMs within one performance dimension can be called redundant in empirical application, as high intercorrelations between different indicators are a sign that they measure something similar (or the same). Based on our findings, we propose the use of any pair of indicators (one relating to the number of papers in a researcher's productive core and one relating to the impact of these core papers) as a meaningful approach for comparing scientists.
    Type
    a
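
    The h index and a index discussed in the abstract above are easy to state in code; a minimal sketch with made-up citation counts:

      # h index: the largest h such that h papers have at least h citations each.
      def h_index(citations):
          ranked = sorted(citations, reverse=True)
          return max((i + 1 for i, c in enumerate(ranked) if c >= i + 1), default=0)

      # a index: mean citations of the h-core, i.e. of the h most cited papers.
      def a_index(citations):
          h = h_index(citations)
          core = sorted(citations, reverse=True)[:h]
          return sum(core) / h if h else 0.0

      papers = [10, 9, 6, 4, 2, 1]   # made-up citation counts
      print(h_index(papers))         # 4 -> four papers with at least 4 citations
      print(a_index(papers))         # 7.25 -> (10 + 9 + 6 + 4) / 4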
  3. Mutz, R.; Daniel, H.-D.: What is behind the curtain of the Leiden Ranking? (2015) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 2171) [ClassicSimilarity], result of:
              0.009076704 = score(doc=2171,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 2171, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2171)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Even with very well-documented rankings of universities, it is difficult for an individual university to reconstruct its position in the ranking. What makes a university place higher or lower in the ranking? Taking the example of ETH Zurich, the aim of this communication is to reconstruct how the high position of ETHZ (rank no. 1 in Europe in PP(top 10%)) in the Centre for Science and Technology Studies (CWTS) Leiden Ranking 2013 in the field "social sciences, arts and humanities" came about. According to our analyses, the bibliometric indicator values of a university depend very strongly on weights that result in differing estimates of both the total number of a university's publications and the number of publications with a citation impact in the 90th percentile, or PP(top 10%). In addition, we examine the effect of weights at the level of individual publications. Based on the results, we offer recommendations for improving the Leiden Ranking (for example, publication of sample calculations to increase transparency).
    Type
    a
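
    A hedged sketch of the PP(top 10%) logic from the abstract above. The Leiden Ranking also normalizes by field and publication year and applies fractional counting weights, all of which is omitted here:

      # Share of a university's publications that are among the 10% most cited
      # publications of their field; field/year normalization is omitted.
      def pp_top10(univ_citations, field_citations):
          ranked = sorted(field_citations)
          threshold = ranked[int(0.9 * len(ranked))]  # 90th-percentile citation count
          top = sum(1 for c in univ_citations if c >= threshold)
          return 100.0 * top / len(univ_citations)

      field = list(range(100))            # made-up field distribution: 0..99 citations
      university = [3, 12, 95, 97, 40]    # made-up university publications
      print(pp_top10(university, field))  # 40.0 -> 2 of 5 papers at or above 90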
  4. Mutz, R.; Wolbring, T.; Daniel, H.-D.: The effect of the "very important paper" (VIP) designation in Angewandte Chemie International Edition on citation impact : a propensity score matching analysis (2017) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3792) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3792,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3792, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3792)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Scientific journals publish an increasing number of articles every year. To steer readers' attention to the most important papers, journals use several techniques (e.g., lead paper). Angewandte Chemie International Edition (AC), a leading international journal in chemistry, signals high-quality papers by designating them a "very important paper" (VIP). This study investigates the citation impact of Communications in AC receiving the special feature VIP, both cumulated and over time. Using propensity score matching, the treatment group (VIP) and the control group (non-VIP) were balanced for 14 covariates to estimate the unconfounded "average treatment effect on the treated" for the VIP designation. Out of N = 3,011 Communications published in 2007 and 2008, N = 207 received the special feature VIP. For each Communication, data were collected from AC (e.g., referees' ratings) and from the databases Chemical Abstracts (e.g., sections) and the Web of Science (e.g., citations). The estimated unconfounded average treatment effect on the treated (that is, on Communications designated as a VIP) was statistically significant and amounted to 19.83 citations. In addition, the special feature VIP fostered cumulated annual citation growth: for instance, the time until a Communication reached its maximum annual number of citations was reduced.
    Type
    a
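
    A minimal sketch of the propensity score matching step described in the abstract above, with synthetic data and 3 covariates instead of the study's 14. Logistic-regression scores with 1-nearest-neighbor matching is one common variant, not necessarily the authors' exact procedure:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 3))                            # synthetic covariates
      treated = rng.random(300) < 1 / (1 + np.exp(-X[:, 0]))   # selection on X[:, 0]
      y = X[:, 0] + 5 * treated + rng.normal(size=300)         # outcome, true effect = 5

      # Propensity score: estimated probability of receiving the "VIP" treatment.
      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

      # Match each treated unit to the control unit with the closest score.
      controls = np.where(~treated)[0]
      att = np.mean([
          y[i] - y[controls[np.argmin(np.abs(ps[controls] - ps[i]))]]
          for i in np.where(treated)[0]
      ])
      print(att)  # average treatment effect on the treated, roughly 5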
  5. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 1109) [ClassicSimilarity], result of:
              0.008118451 = score(doc=1109,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 1109, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we look at whether the subject area type that an institution (university or research-focused institution) belongs to, in terms of the fields it researches, has an influence on its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exercises an important influence on the outcome of a performance measurement: certain subject area types of institutions have an advantage in the ranking positions when compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also with indicators that are field-normalized.
    Type
    a
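
    Latent class analysis with categorical indicators is not part of the standard Python stack; as a rough, hedged stand-in for the categorization step in the abstract above, one might cluster institutions by their publication shares across subject areas with a mixture model:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Rows: institutions; columns: publication shares in four subject areas.
      shares = rng.dirichlet(alpha=[2, 2, 1, 1], size=50)

      # A Gaussian mixture is a simplification of LCA, which models categorical
      # indicators; it serves here only to illustrate the categorization idea.
      gm = GaussianMixture(n_components=3, random_state=1).fit(shares)
      profile = gm.predict(shares)   # latent subject-area "type" per institution
      print(np.bincount(profile))    # institutions per latent class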
  6. Bornmann, L.; Mutz, R.: Growth rates of modern science : a bibliometric analysis based on the number of publications and cited references (2015) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 2261) [ClassicSimilarity], result of:
              0.006765375 = score(doc=2261,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 2261, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many studies (in information science) have looked at the growth of science. In this study, we reexamine the question of the growth of science. To do this we (a) use current data up to publication year 2012 and (b) analyze the data across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data were analyzed with an advanced statistical technique, segmented regression analysis, which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (a) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (b) the number of cited references in the publications of the source items per cited reference year. We looked at the rate at which science has grown since the mid-1600s. In our analysis of cited references we identified three essential growth phases in the development of science, each of which led to a tripling of the growth rate compared with the previous phase: from less than 1% up to the middle of the 18th century, to 2-3% up to the period between the two world wars, and to 8-9% up to 2010.
    Type
    a
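
    The growth-rate arithmetic behind the abstract's percentages can be sketched as a log-linear fit per segment. The segment boundaries below are illustrative; the paper estimates them with segmented regression rather than fixing them in advance:

      import numpy as np

      # Within one segment, fit ln(count) ~ year; the slope b implies an
      # annual growth rate of exp(b) - 1.
      def annual_growth_rate(years, counts):
          slope, _ = np.polyfit(years, np.log(counts), deg=1)
          return np.exp(slope) - 1.0

      years = np.arange(1980, 2011)
      counts = 1000 * 1.08 ** (years - 1980)   # synthetic series growing 8% per year
      print(f"{annual_growth_rate(years, counts):.1%}")  # 8.0%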