Search (3 results, page 1 of 1)

  • author_ss:"Opthof, T."
  • author_ss:"Leydesdorff, L."
  1. Leydesdorff, L.; Opthof, T.: Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations (2010) 0.00
    Abstract
    Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (a) citation behavior varies among fields of science and therefore leads to systematic differences, and (b) there are no statistics to inform us whether differences are significant. The recently introduced "source normalized impact per paper" (SNIP) indicator of Scopus tries to remedy the first of these two problems, but a number of normalization decisions are involved, which makes it impossible to test for significance. Using fractional counting of citations, based on the assumption that impact is proportionate to the number of references in the citing documents, citations can be contextualized at the paper level, and the aggregated impacts of sets can be tested for their significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386), despite a fivefold difference between their impact factors (2.793 and 13.156, respectively).
    Type
    a
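The fractional counting of citations described in this abstract can be sketched in a few lines. The function name and the sample counts are illustrative, not taken from the paper, and the paper's full method adds significance testing on top of this weighting; this is a minimal sketch of the weighting step only.

```python
def fractional_impact(citing_reference_counts):
    """Fractionally counted impact of one cited paper.

    Each citation is weighted by 1 / (length of the citing document's
    reference list), so a citation from a reference-dense field counts
    for less: impact is assumed proportionate to the number of
    references in the citing documents.
    """
    return sum(1.0 / n for n in citing_reference_counts if n > 0)

# A paper cited by three documents carrying 10, 25, and 50 references
# receives 0.1 + 0.04 + 0.02 = 0.16 fractional citations.
print(fractional_impact([10, 25, 50]))
```

This is what lets impacts be compared across fields: a citation from a long reference list (common in, e.g., cell biology) contributes less than one from a short list (common in mathematics).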
  2. Leydesdorff, L.; Opthof, T.: Citation analysis with Medical Subject Headings (MeSH) using the Web of Knowledge : a new routine (2013) 0.00
    Abstract
    Citation analysis of documents retrieved from the Medline database (at the Web of Knowledge) has been possible only on a case-by-case basis. A technique is presented here for citation analysis in batch mode using both Medical Subject Headings (MeSH) at the Web of Knowledge and the Science Citation Index at the Web of Science (WoS). This freeware routine is applied to the case of "Brugada Syndrome," a specific disease and field of research (since 1992). The journals containing these publications, for example, are attributed to WoS categories other than "cardiac and cardiovascular systems", perhaps because of the possibility of genetic testing for this syndrome in the clinic. With this routine, all the instruments available for citation analysis can now be used on the basis of MeSH terms. Other options for crossing between Medline, WoS, and Scopus are also reviewed.
    Type
    a
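The batch routine itself is distributed as freeware and is not reproduced here, but its starting point, retrieving a document set by MeSH heading from Medline/PubMed and matching it in the Web of Science, can be sketched. The helper names below are hypothetical; `[MeSH Terms]` is the standard PubMed field tag, and `TI=` is the WoS advanced-search title tag.

```python
def mesh_query(term):
    """Build a PubMed query restricted to a MeSH heading
    (e.g. the "Brugada Syndrome" case from the abstract)."""
    return f'"{term}"[MeSH Terms]'

def wos_title_query(titles):
    """Combine retrieved titles into one WoS advanced-search string,
    so the Medline set can be matched in the Science Citation Index."""
    return " OR ".join(f'TI=("{t}")' for t in titles)

print(mesh_query("Brugada Syndrome"))
```

With the set matched in WoS, all the usual citation-analysis instruments become available for a MeSH-defined field, which is the point of the routine.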
  3. Leydesdorff, L.; Bornmann, L.; Mutz, R.; Opthof, T.: Turning the tables on citation analysis one more time : principles for comparing sets of documents (2011) 0.00
    Abstract
    We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile-ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] is an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
    Type
    a
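The percentile-rank approach of this last abstract can be sketched as follows. The six rank classes assumed here (bottom 50%, 50-75, 75-90, 90-95, 95-99, top 1%) are one common choice for an R(6)-style indicator; these cut-offs and the function names are assumptions for illustration, not quoted from the paper.

```python
from bisect import bisect_right

def percentile_rank(citations, reference_set):
    """Percentile (0-100) of one paper's citation count within its
    reference set: the share of reference papers cited no more often."""
    ranked = sorted(reference_set)
    return 100.0 * bisect_right(ranked, citations) / len(ranked)

def six_class(percentile):
    """Map a percentile to one of six rank classes (1 = bottom half,
    6 = top 1%); the class boundaries are an assumed convention."""
    for cls, upper in enumerate((50, 75, 90, 95, 99), start=1):
        if percentile <= upper:
            return cls
    return 6

# A paper cited 5 times, among reference papers cited 0-10 times,
# sits at the 80th percentile and falls into class 3 (75-90%).
p = percentile_rank(5, [0, 1, 2, 5, 10])
```

Because every paper is scored by its rank rather than its raw count, a single highly cited outlier no longer dominates the aggregate, which is why this scheme improves on averages-based indicators for skewed distributions.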