Search (2 results, page 1 of 1)

  • author_ss:"Marchant, T."
  • theme_ss:"Informetrie"
  1. Marchant, T.: Score-based bibliometric rankings of authors (2009) 0.08
    0.083452046 = sum of:
      0.047776405 = product of:
        0.19110562 = sum of:
          0.19110562 = weight(_text_:authors in 2849) [ClassicSimilarity], result of:
            0.19110562 = score(doc=2849,freq=8.0), product of:
              0.2371355 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05201693 = queryNorm
              0.80589205 = fieldWeight in 2849, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0625 = fieldNorm(doc=2849)
        0.25 = coord(1/4)
      0.03567564 = product of:
        0.07135128 = sum of:
          0.07135128 = weight(_text_:t in 2849) [ClassicSimilarity], result of:
            0.07135128 = score(doc=2849,freq=2.0), product of:
              0.20491594 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.05201693 = queryNorm
              0.34819782 = fieldWeight in 2849, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.0625 = fieldNorm(doc=2849)
        0.5 = coord(1/2)
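    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sanity check, here is a minimal Python sketch that reproduces the "authors" clause of this score from the factors reported above; all values are copied from the explanation, and the idf comment restates Lucene's ClassicSimilarity formula:

      from math import sqrt

      # Factors reported in the explain tree for doc 2849, term "authors"
      freq       = 8.0         # termFreq
      idf        = 4.558814    # Lucene: log(maxDocs / (docFreq + 1)) + 1, docFreq=1258, maxDocs=44218
      query_norm = 0.05201693
      field_norm = 0.0625
      coord      = 1 / 4       # one of four optional query clauses matched

      tf           = sqrt(freq)                   # 2.828427
      query_weight = idf * query_norm             # 0.2371355
      field_weight = tf * idf * field_norm        # 0.80589205
      clause       = query_weight * field_weight  # 0.19110562
      print(clause * coord)                       # ≈ 0.047776405

    The "_text_:t" clause below it follows the same pattern with freq=2.0 and coord(1/2).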
    
    Abstract
    Scoring rules (or score-based rankings or summation-based rankings) form a family of bibliometric rankings of authors such that authors are ranked according to the sum over all their publications of some partial scores. Many of these rankings are widely used (e.g., number of publications, weighted or not by the impact factor, by the number of authors, or by the number of citations). We present an axiomatic analysis of the family of all scoring rules and of some particular cases within this family.
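    In the family described in this abstract, an author's total score is simply the sum of a per-publication partial score, and the choice of that partial score determines the ranking. A small illustrative sketch (the records and score functions below are invented, not taken from the paper):

      # Rank authors by the sum of per-publication partial scores.
      publications = {
          # author: list of (citations, number of authors) per paper -- invented data
          "A": [(12, 1), (3, 2)],
          "B": [(4, 4), (4, 4), (4, 4)],
      }

      def rank(partial_score):
          totals = {a: sum(partial_score(c, n) for c, n in pubs)
                    for a, pubs in publications.items()}
          return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

      print(rank(lambda c, n: 1))      # number of publications
      print(rank(lambda c, n: c))      # total citations
      print(rank(lambda c, n: 1 / n))  # paper count fractionalized by the number of authors

    Different partial scores can reverse the order: here B comes first on publication count, while A comes first on total citations and on the fractional count.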
  2. Bouyssou, D.; Marchant, T.: Ranking scientists and departments in a consistent manner (2011) 0.04
    0.044672884 = sum of:
      0.017916152 = product of:
        0.07166461 = sum of:
          0.07166461 = weight(_text_:authors in 4751) [ClassicSimilarity], result of:
            0.07166461 = score(doc=4751,freq=2.0), product of:
              0.2371355 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05201693 = queryNorm
              0.30220953 = fieldWeight in 4751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=4751)
        0.25 = coord(1/4)
      0.026756732 = product of:
        0.053513464 = sum of:
          0.053513464 = weight(_text_:t in 4751) [ClassicSimilarity], result of:
            0.053513464 = score(doc=4751,freq=2.0), product of:
              0.20491594 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.05201693 = queryNorm
              0.26114836 = fieldWeight in 4751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.046875 = fieldNorm(doc=4751)
        0.5 = coord(1/2)
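    The per-clause factors combine exactly as in the first result; at the top level Lucene multiplies each clause score by its coord factor and sums. A one-line check against the numbers above (values copied from the explanation):

      # Top-level combination for doc 4751: clause scores times their coord factors
      print(0.07166461 * 0.25 + 0.053513464 * 0.5)  # ≈ 0.044672884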
    
    Abstract
    The standard data that we use when computing bibliometric rankings of scientists are their publication/citation records, i.e., so many papers with 0 citations, so many with 1 citation, so many with 2 citations, etc. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors gave in to temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains the "worst" scientists, and only them. This problem will not occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this article, we explore the consequences of consistency and we characterize two families of consistent rankings.
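    The consistency property discussed here rules out exactly this kind of reversal when individual records are merged into department records. The sketch below is a made-up numerical illustration using the h-index, a ranking that is known not to be consistent in this sense; it is not an example taken from the article:

      # Consistency: if scientist x ranks at least as high as y, merging the same
      # record z into both should not reverse the order. The h-index violates this.
      def h_index(citations):
          cits = sorted(citations, reverse=True)
          return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

      x = [3, 3, 3]   # three papers with 3 citations each -> h = 3
      y = [10, 10]    # two papers with 10 citations each  -> h = 2, so x ranks above y
      z = [10, 10]    # the same record merged into both

      print(h_index(x), h_index(y))          # 3 2
      print(h_index(x + z), h_index(y + z))  # 3 4 -> order reversed after merging

    A summation-based scoring rule of the kind analyzed in the first result cannot produce this reversal, since the score of a merged record is just the sum of the scores of its parts.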