Search (2 results, page 1 of 1)

  • author_ss:"Visser, M.S."
  • theme_ss:"Informetrie"
  1. Nederhof, A.J.; Visser, M.S.: Quantitative deconstruction of citation impact indicators : waxing field impact but waning journal impact (2004) 0.00
    
    Abstract
    In two case studies of research units, reference values used to benchmark research performance appeared to show contradictory results: the average citation level in the subfields (FCSm) increased world-wide, while the citation level of the journals (JCSm) decreased, although concomitant changes were expected. Explanations were sought in a shift in preference of document types, a change in publication preference for subfields, and changes in journal coverage. Publishing in newly covered journals with a low impact had a negative effect on impact ratios. However, the main factor behind the increase in FCSm was the distribution of articles across the five-year block periods that were studied. Publication in lower-impact journals produced a lagging JCSm. Actual values of JCSm, FCSm, and citations per publication (CPP) are not very informative either about research performance or about the development of impact over time in a certain subfield with block indicators. Normalized citation impact indicators are free from such effects and should be consulted primarily in research performance assessments.
    Type
    a
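
  The indicators named in the abstract above (CPP, JCSm, FCSm, and their normalized ratios) can be made concrete with a short computational sketch. The Python snippet below is illustrative only: it assumes each publication carries its citation count together with the mean citation score of its journal (JCS) and of its subfield (FCS), and all function and field names are hypothetical rather than taken from the paper.

      from statistics import mean

      def block_indicators(pubs):
          # Compute block indicators for a set of publications. Each pub
          # carries the citations it received, the mean citation score of
          # its journal ("jcs"), and of its subfield ("fcs").
          cpp = mean(p["citations"] for p in pubs)   # citations per publication
          jcsm = mean(p["jcs"] for p in pubs)        # mean journal citation score
          fcsm = mean(p["fcs"] for p in pubs)        # mean field citation score
          return {
              "CPP": cpp,
              "JCSm": jcsm,
              "FCSm": fcsm,
              # Normalized impact: the unit's citation rate relative to its
              # journals (CPP/JCSm) and to its subfields (CPP/FCSm).
              "CPP/JCSm": cpp / jcsm,
              "CPP/FCSm": cpp / fcsm,
          }

      # Example five-year block of three papers (made-up numbers); a
      # CPP/FCSm above 1.0 would indicate above-world-average field impact.
      block = [
          {"citations": 12, "jcs": 8.0, "fcs": 6.0},
          {"citations": 3,  "jcs": 5.0, "fcs": 4.5},
          {"citations": 7,  "jcs": 6.5, "fcs": 5.0},
      ]
      print(block_indicators(block))
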
  2. Waltman, L.; Calero-Medina, C.; Kosten, J.; Noyons, E.C.M.; Tijssen, R.J.W.; Eck, N.J. van; Leeuwen, T.N. van; Raan, A.F.J. van; Visser, M.S.; Wouters, P.: The Leiden ranking 2011/2012 : data collection, indicators, and interpretation (2012) 0.00
    
    Abstract
    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. A detailed description is also offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English-language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking, and a number of its limitations are pointed out.
    Type
    a
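
  Innovation (2) in the abstract above, fractional counting of collaborative publications, lends itself to a short sketch. The Python snippet below contrasts full with fractional counting under the simplifying assumption that each publication is represented by the set of collaborating universities; it is a toy illustration, not the Leiden Ranking's actual implementation.

      def full_and_fractional(pubs, university):
          # Full counting credits a university with 1 for every paper it
          # appears on; fractional counting divides each paper's credit
          # equally over the k collaborating universities (1/k each).
          full, fractional = 0.0, 0.0
          for unis in pubs:
              if university in unis:
                  full += 1.0
                  fractional += 1.0 / len(unis)
          return full, fractional

      # Example: three papers, the second a three-way collaboration.
      pubs = [
          {"Leiden"},
          {"Leiden", "Delft", "Utrecht"},
          {"Delft"},
      ]
      print(full_and_fractional(pubs, "Leiden"))  # -> (2.0, 1.3333...)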