Search (3 results, page 1 of 1)

  • author_ss:"Mingers, J."
  • theme_ss:"Informetrie"
  1. Mingers, J.; Macri, F.; Petrovici, D.: Using the h-index to measure the quality of journals in the field of business and management (2012) 0.02
    0.0179753 = product of:
      0.0539259 = sum of:
        0.042333104 = weight(_text_:web in 2741) [ClassicSimilarity], result of:
          0.042333104 = score(doc=2741,freq=4.0), product of:
            0.13836423 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.042397358 = queryNorm
            0.3059541 = fieldWeight in 2741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2741)
        0.011592798 = product of:
          0.034778394 = sum of:
            0.034778394 = weight(_text_:29 in 2741) [ClassicSimilarity], result of:
              0.034778394 = score(doc=2741,freq=2.0), product of:
                0.14914064 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.042397358 = queryNorm
                0.23319192 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2741)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
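The explain tree above is Lucene's ClassicSimilarity (TF-IDF) score breakdown. As a minimal sketch, the arithmetic can be reproduced from the values shown, assuming Lucene's documented defaults: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord = (matching clauses / total clauses). The queryNorm and fieldNorm values are taken directly from the output, not recomputed.

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # score(t, d) = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = sqrt(freq) * idf * fieldNorm
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

QUERY_NORM = 0.042397358  # from the explain output

# weight(_text_:web in 2741): freq=4, docFreq=4597, fieldNorm=0.046875
s_web = term_score(4.0, 4597, 44218, QUERY_NORM, 0.046875)

# weight(_text_:29 in 2741): freq=2, docFreq=3565, fieldNorm=0.046875
s_29 = term_score(2.0, 3565, 44218, QUERY_NORM, 0.046875)

# coord() scales each sum by the fraction of query clauses that matched:
# the "29" clause sits under coord(1/3), the outer sum under coord(2/6).
total = (s_web + s_29 * (1 / 3)) * (2 / 6)

print(s_web)   # close to 0.042333104 in the explain output
print(total)   # close to the document score 0.0179753
```

The reproduced values agree with the explain output to about six decimal places; the residual difference comes from the rounding of the printed idf values. The same arithmetic applies to the explain trees of results 2 and 3, which differ only in term statistics and coord fractions.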
    
    Abstract
    This paper considers the use of the h-index as a measure of a journal's research quality and contribution. We study a sample of 455 journals in business and management, all of which are included in the ISI Web of Science (WoS) and the Association of Business Schools' peer-review journal ranking list. The h-index is compared both with traditional impact factors and with peer-review judgements. We also consider two sources of citation data: the WoS itself and Google Scholar. The conclusions are that the h-index is preferable to the impact factor for a variety of reasons, especially the impact factor's selective coverage and the fact that it disadvantages journals that publish many papers. Google Scholar is also preferred to WoS as a data source. However, the paper notes that no single metric is sufficient to properly evaluate research achievements.
    Date
    29.01.2016 19:00:16
    Object
    Web of Science
  2. Mingers, J.; Burrell, Q.L.: Modeling citation behavior in Management Science journals (2006) 0.00
    0.0019147521 = product of:
      0.011488512 = sum of:
        0.011488512 = product of:
          0.034465536 = sum of:
            0.034465536 = weight(_text_:22 in 994) [ClassicSimilarity], result of:
              0.034465536 = score(doc=994,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.23214069 = fieldWeight in 994, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=994)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    26.12.2007 19:22:05
  3. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.00
    0.001290741 = product of:
      0.007744446 = sum of:
        0.007744446 = product of:
          0.023233337 = sum of:
            0.023233337 = weight(_text_:system in 5225) [ClassicSimilarity], result of:
              0.023233337 = score(doc=5225,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.17398985 = fieldWeight in 5225, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5225)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%, the proportion of the top-10% most-highly-cited papers assigned to a university, we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical-significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each ecosystem of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.