Search (1 result, page 1 of 1)

  • author_ss:"Bouyssou, D."
  • theme_ss:"Informetrie"
  1. Bouyssou, D.; Marchant, T.: Ranking scientists and departments in a consistent manner (2011)
    
    Abstract
    The standard data that we use when computing bibliometric rankings of scientists are their publication/citation records, i.e., so many papers with 0 citations, so many with 1 citation, so many with 2 citations, etc. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors gave in to temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains the "worst" scientists, and only them. This problem will not occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this article, we explore the consequences of consistency and we characterize two families of consistent rankings.
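
    A concrete illustration of the reversal described above (a sketch constructed for this summary, not taken from the paper, whose abstract does not single out any particular method): the h-index is one widely used ranking method that behaves inconsistently when records are merged. With the made-up publication records below, every scientist in department A outranks every scientist in department B, yet department B outranks department A once each department is scored on its pooled record.

    # Illustrative sketch; all publication records are invented for this example.
    def h_index(citations):
        """Largest h such that at least h papers have at least h citations."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

    # Department A: two scientists, each with 4 papers cited 4 times (h = 4).
    # Department B: two scientists, each with 3 papers cited 100 times (h = 3).
    dept_a = [[4, 4, 4, 4], [4, 4, 4, 4]]
    dept_b = [[100, 100, 100], [100, 100, 100]]

    print([h_index(s) for s in dept_a])          # [4, 4]
    print([h_index(s) for s in dept_b])          # [3, 3]

    # Rank each department by the h-index of its pooled publication record.
    pooled_a = [c for s in dept_a for c in s]    # 8 papers, 4 citations each
    pooled_b = [c for s in dept_b for c in s]    # 6 papers, 100 citations each
    print(h_index(pooled_a), h_index(pooled_b))  # 4 6

    # Every scientist in A outranks every scientist in B, yet department B
    # outranks department A: exactly the kind of reversal that the consistency
    # property studied in the paper rules out.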