Search (152 results, page 1 of 8)

  • theme_ss:"Informetrie"
  • year_i:[2010 TO 2020}
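The two active facets above are Solr filter queries; the mixed brackets in year_i:[2010 TO 2020} are Solr's half-open range syntax, so 2010 is included and 2020 is excluded. As a minimal sketch, assuming a standard Solr /select endpoint (the host, core name, and example query term are assumptions, not taken from this page), such a faceted search could be issued like this:

    from urllib.parse import urlencode
    from urllib.request import urlopen
    import json

    # Hypothetical Solr endpoint; host and core name are assumptions.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = urlencode({
        "q": "_text_:ranking",             # example query term
        "fq": ['theme_ss:"Informetrie"',   # the facet filters shown above
               "year_i:[2010 TO 2020}"],   # half-open range: 2010-2019
        "rows": 20,                        # 20 results per page, as here
        "wt": "json",
        "debugQuery": "true",              # yields explain trees like those below
    }, doseq=True)

    with urlopen(SOLR_SELECT + "?" + params) as resp:
        results = json.load(resp)
    print(results["response"]["numFound"])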
  1. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.16
    0.15873328 = product of:
      0.47619984 = sum of:
        0.23809992 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23809992 = score(doc=2188,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.23809992 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23809992 = score(doc=2188,freq=2.0), product of:
            0.3177388 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03747799 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.33333334 = coord(2/6)
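    The explain tree above is Lucene ClassicSimilarity (TF-IDF) output: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and tf = sqrt(termFreq); the summed term scores are then scaled by the coordination factor coord(matching clauses/total clauses). A minimal sketch reproducing the total for this record:

        import math

        def classic_term_score(freq, idf, query_norm, field_norm):
            """One term's contribution under Lucene ClassicSimilarity."""
            tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
            query_weight = idf * query_norm       # 0.3177388
            field_weight = tf * idf * field_norm  # 0.7493574
            return query_weight * field_weight    # 0.23809992

        # idf = 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 25) = 8.478011
        term = classic_term_score(freq=2.0, idf=8.478011,
                                  query_norm=0.03747799, field_norm=0.0625)
        total = (term + term) * (2 / 6)  # two matching clauses, coord(2/6)
        print(total)  # ~0.1587333 (shown above as 0.15873328 in float32)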
    
    Content
    See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/
  2. Dobrota, M.; Dobrota, M.: ARWU ranking uncertainty and sensitivity : what if the award factor was excluded? (2016) 0.08
    0.080006525 = product of:
      0.24001957 = sum of:
        0.22986408 = weight(_text_:ranking in 2652) [ClassicSimilarity], result of:
          0.22986408 = score(doc=2652,freq=20.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            1.1339021 = fieldWeight in 2652, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2652)
        0.0101555 = product of:
          0.030466499 = sum of:
            0.030466499 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.030466499 = score(doc=2652,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 2652, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2652)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The Academic Ranking of World Universities (ARWU) uses six university performance indicators, including "Alumni" and "Awards"-the number of alumni and staff winning Nobel Prizes and Fields Medals. These two indicators raised doubts about the reliability of this ranking method because they are difficult to cope with. Recently, a newsletter was published featuring a reduced ARWU ranking list, leaving out Nobel Prize and Fields Medal indicators: the Alternative Ranking (Excluding Award Factor). We used uncertainty and sensitivity analyses to examine and compare the stability and confidence of the official ARWU ranking and the Alternative Ranking. The results indicate that if the ARWU ranking is reduced to the 4-indicator Alternative Ranking, it shows greater certainty and stability in ranking universities.
    Date
    22. 1.2016 14:40:53
  3. Perianes-Rodriguez, A.; Ruiz-Castillo, J.: ¬The impact of classification systems in the evaluation of the research performance of the Leiden Ranking universities (2018) 0.04
    0.04322958 = product of:
      0.12968874 = sum of:
        0.121149 = weight(_text_:ranking in 4374) [ClassicSimilarity], result of:
          0.121149 = score(doc=4374,freq=8.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5976189 = fieldWeight in 4374, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4374)
        0.008539738 = product of:
          0.025619213 = sum of:
            0.025619213 = weight(_text_:29 in 4374) [ClassicSimilarity], result of:
              0.025619213 = score(doc=4374,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19432661 = fieldWeight in 4374, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4374)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    In this article, we investigate the consequences of choosing different classification systems-namely, the way publications (or journals) are assigned to scientific fields-for the ranking of research units. We study the impact of this choice on the ranking of 500 universities in the 2013 edition of the Leiden Ranking in two cases. First, we compare a Web of Science (WoS) journal-level classification system, consisting of 236 subject categories, and a publication-level algorithmically constructed system, denoted G8, consisting of 5,119 clusters. The result is that the consequences of the move from the WoS to the G8 system using the Top 1% citation impact indicator are much greater than the consequences of this move using the Top 10% indicator. Second, we compare the G8 classification system and a publication-level alternative of the same family, the G6 system, consisting of 1,363 clusters. The result is that, although less important than in the previous case, the consequences of the move from the G6 to the G8 system under the Top 1% indicator are still of a large order of magnitude.
    Date
    29. 7.2018 14:41:34
  4. Mayr, P.: Bradfordizing als Re-Ranking-Ansatz in Literaturinformationssystemen (2011) 0.04
    0.037682008 = product of:
      0.11304602 = sum of:
        0.102798335 = weight(_text_:ranking in 4292) [ClassicSimilarity], result of:
          0.102798335 = score(doc=4292,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 4292, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4292)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 4292) [ClassicSimilarity], result of:
              0.030743055 = score(doc=4292,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 4292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4292)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This article presents a re-ranking approach for search systems that can measurably improve searches for scholarly literature. The non-text-oriented ranking method Bradfordizing is introduced and then, in the empirical part of the article, evaluated for its effectiveness on typical subject-specific search topics. Bradford's Law of Scattering (BLS), on which Bradfordizing is based, holds that the literature on any subject or topic is distributed across zones of differing document concentration: a core zone with a high concentration of the literature is followed by zones of medium and low concentration. Bradfordizing thus sorts, or ranks, a document set by the so-called core journals. A retrieval test with 164 intellectually assessed queries in subject databases covering the social and political sciences, economics, psychology, and medicine shows that documents from the core journals are judged relevant significantly more often than documents from the second document zone, the peripheral journals. Implementing Bradfordizing and further re-ranking methods delivers immediate added value for the user.
    Date
    9. 2.2011 17:47:29
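    The core-journal re-ranking described in the abstract of this record can be illustrated with a minimal sketch; this is not the author's implementation, and counting hits per journal and sorting by that yield is a simplifying stand-in for the full Bradford zone analysis:

        from collections import Counter

        def bradfordize(docs):
            """Re-rank docs (dicts with a 'journal' key) so that hits from
            high-yield 'core' journals surface first, per Bradford's Law."""
            journal_freq = Counter(d["journal"] for d in docs)
            # Stable sort: docs keep their original relative order (e.g. by
            # text relevance) within each journal.
            return sorted(docs, key=lambda d: -journal_freq[d["journal"]])

        hits = [{"title": "A", "journal": "Scientometrics"},
                {"title": "B", "journal": "JASIST"},
                {"title": "C", "journal": "Scientometrics"},
                {"title": "D", "journal": "Rare Journal"},
                {"title": "E", "journal": "Scientometrics"}]
        for d in bradfordize(hits):
            print(d["journal"], d["title"])  # Scientometrics hits come first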
  5. Rötzer, F.: Bindestriche in Titeln von Artikeln schaden der wissenschaftlichen Reputation (2019) 0.04
    0.037682008 = product of:
      0.11304602 = sum of:
        0.102798335 = weight(_text_:ranking in 5697) [ClassicSimilarity], result of:
          0.102798335 = score(doc=5697,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.5070964 = fieldWeight in 5697, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=5697)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 5697) [ClassicSimilarity], result of:
              0.030743055 = score(doc=5697,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5697)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Researchers claim to have found that the influential ranking by citation frequency and Journal Impact Factor is flawed. One would expect that programs, whether AI-based or not, could compile a ranking impartially according to given criteria. Yet unconsidered influences keep coming into play, and they can go unnoticed for a long time. With AI programs it has recently become clear that the selection of the data can play a distorting role and lead to strange results.
    Date
    29. 6.2019 17:46:17
  6. Waltman, L.; Calero-Medina, C.; Kosten, J.; Noyons, E.C.M.; Tijssen, R.J.W.; Eck, N.J. van; Leeuwen, T.N. van; Raan, A.F.J. van; Visser, M.S.; Wouters, P.: ¬The Leiden ranking 2011/2012 : data collection, indicators, and interpretation (2012) 0.04
    0.03640075 = product of:
      0.21840449 = sum of:
        0.21840449 = weight(_text_:ranking in 514) [ClassicSimilarity], result of:
          0.21840449 = score(doc=514,freq=26.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            1.0773728 = fieldWeight in 514, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=514)
      0.16666667 = coord(1/6)
    
    Abstract
    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out.
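    Innovation (2), fractional counting, splits the credit for a collaborative publication across the collaborating universities instead of granting each of them full credit. A minimal sketch of the difference; dividing by the number of universities on a paper is a simplification of the Leiden methodology, which fractions at the level of author addresses:

        def count_publications(papers, university, fractional=True):
            """Full vs. fractional counting; each paper is given as the list
            of universities appearing in its author addresses."""
            total = 0.0
            for unis in papers:
                if university in unis:
                    total += 1.0 / len(unis) if fractional else 1.0
            return total

        papers = [["Leiden", "ETH Zurich"],
                  ["Leiden"],
                  ["Leiden", "MIT", "ETH Zurich"]]
        print(count_publications(papers, "Leiden", fractional=False))  # 3.0
        print(count_publications(papers, "Leiden", fractional=True))   # ~1.83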
  7. Vieira, E.S.; Cabral, J.A.S.; Gomes, J.A.N.F.: Definition of a model based on bibliometric indicators for assessing applicants to academic positions (2014) 0.03
    0.032217465 = product of:
      0.09665239 = sum of:
        0.084804304 = weight(_text_:ranking in 1221) [ClassicSimilarity], result of:
          0.084804304 = score(doc=1221,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.4183332 = fieldWeight in 1221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1221)
        0.011848084 = product of:
          0.03554425 = sum of:
            0.03554425 = weight(_text_:22 in 1221) [ClassicSimilarity], result of:
              0.03554425 = score(doc=1221,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.2708308 = fieldWeight in 1221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1221)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    A model based on a set of bibliometric indicators is proposed for the prediction of the ranking of applicants to an academic position as produced by a committee of peers. The results show that a very small number of indicators may lead to a robust prediction in about 75% of the cases. We start with 12 indicators to build a few composite indicators by factor analysis. Following a discrete choice model, we arrive at 3 comparatively good predictive models. We conclude that these models have surprisingly good predictive power and may help peers in their selection process.
    Date
    18. 3.2014 18:22:21
  8. Jiang, Z.; Liu, X.; Chen, Y.: Recovering uncaptured citations in a scholarly network : a two-step citation analysis to estimate publication importance (2016) 0.03
    0.031401675 = product of:
      0.09420502 = sum of:
        0.085665286 = weight(_text_:ranking in 3018) [ClassicSimilarity], result of:
          0.085665286 = score(doc=3018,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.42258036 = fieldWeight in 3018, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3018)
        0.008539738 = product of:
          0.025619213 = sum of:
            0.025619213 = weight(_text_:29 in 3018) [ClassicSimilarity], result of:
              0.025619213 = score(doc=3018,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.19432661 = fieldWeight in 3018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3018)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    The citation relationships between publications, which are significant for assessing the importance of scholarly components within a network, have been used for various scientific applications. Missing citation metadata in scholarly databases, however, create problems for classical citation-based ranking algorithms and challenge the performance of citation-based retrieval systems. In this research, we utilize a two-step citation analysis method to investigate the importance of publications for which citation information is partially missing. First, we calculate the importance of the author and then use that importance to estimate the publication importance for some selected articles. To evaluate this method, we designed a simulation experiment, "random citation-missing", to test the two-step citation analysis that we carried out with the Association for Computing Machinery (ACM) Digital Library (DL). In this experiment, we simulated different scenarios in a large-scale scientific digital library, from high-quality citation data to very poor-quality data. The results show that a two-step citation analysis can effectively uncover the importance of publications in different situations. More importantly, we found that the optimized impact from the importance of an author (first step) increases exponentially as the quality of the citation data decreases. The findings from this study can further enhance citation-based publication-ranking algorithms for real-world applications.
    Date
    12. 6.2016 20:31:29
  9. Mayr, P.: Bradfordizing mit Katalogdaten : Alternative Sicht auf Suchergebnisse und Publikationsquellen durch Re-Ranking (2010) 0.03
    0.029675325 = product of:
      0.17805195 = sum of:
        0.17805195 = weight(_text_:ranking in 4301) [ClassicSimilarity], result of:
          0.17805195 = score(doc=4301,freq=12.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.87831676 = fieldWeight in 4301, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4301)
      0.16666667 = coord(1/6)
    
    Abstract
    For literature searches in scholarly search systems, users expect result sets with the highest possible share of relevant, high-quality documents. Besides direct full-text access to the documents, the order and structure of the listed results (ranking) now plays a decisive role for many users. Ranking, or relevance ranking, is distinguished from mere sorting, for example by year of publication, although the boundary to lists ranked "by topical relevance" cannot be drawn cleanly at the conceptual level. Ranking ultimately leads users to engage mainly with the top portion of a result set; the middle and lower portions are often no longer considered. Given the multitude of relevant and available information sources, it is therefore necessary to identify core areas within the search spaces and then present them to the user in highlighted form. Here Philipp Mayr summarizes the results of his dissertation "Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken".
  10. Mingers, J.; Macri, F.; Petrovici, D.: Using the h-index to measure the quality of journals in the field of business and management (2012) 0.03
    0.027645696 = product of:
      0.082937084 = sum of:
        0.0726894 = weight(_text_:ranking in 2741) [ClassicSimilarity], result of:
          0.0726894 = score(doc=2741,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 2741, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2741)
        0.010247685 = product of:
          0.030743055 = sum of:
            0.030743055 = weight(_text_:29 in 2741) [ClassicSimilarity], result of:
              0.030743055 = score(doc=2741,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2741)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper considers the use of the h-index as a measure of a journal's research quality and contribution. We study a sample of 455 journals in business and management, all of which are included in the ISI Web of Science (WoS) and the Association of Business Schools' peer-review journal ranking list. The h-index is compared with both the traditional impact factors and the peer review judgements. We also consider two sources of citation data - the WoS itself and Google Scholar. The conclusions are that the h-index is preferable to the impact factor for a variety of reasons, especially the selective coverage of the impact factor and the fact that it disadvantages journals that publish many papers. Google Scholar is also preferred to WoS as a data source. However, the paper notes that it is not sufficient to use any single metric to properly evaluate research achievements.
    Date
    29. 1.2016 19:00:16
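    The h-index discussed above is the largest h such that h of a journal's papers have at least h citations each; a minimal sketch:

        def h_index(citations):
            """Largest h such that h papers have >= h citations each."""
            h = 0
            for rank, c in enumerate(sorted(citations, reverse=True), start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations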
  11. Mutz, R.; Daniel, H.-D.: What is behind the curtain of the Leiden Ranking? (2015) 0.03
    0.027089743 = product of:
      0.16253845 = sum of:
        0.16253845 = weight(_text_:ranking in 2171) [ClassicSimilarity], result of:
          0.16253845 = score(doc=2171,freq=10.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.8017899 = fieldWeight in 2171, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2171)
      0.16666667 = coord(1/6)
    
    Abstract
    Even with very well-documented university rankings, it is difficult for an individual university to reconstruct its position in the ranking. What determines whether a university places higher or lower? Taking the example of ETH Zurich, the aim of this communication is to reconstruct how the high position of ETHZ (rank no. 1 in Europe on PP(top 10%)) in the Centre for Science and Technology Studies (CWTS) Leiden Ranking 2013 in the field "social sciences, arts and humanities" came about. According to our analyses, the bibliometric indicator values of a university depend very strongly on weights that result in differing estimates of both the total number of a university's publications and the number of publications with a citation impact in the 90th percentile, or PP(top 10%). In addition, we examine the effect of weights at the level of individual publications. Based on the results, we offer recommendations for improving the Leiden Ranking (for example, publication of sample calculations to increase transparency).
  12. García, J.A.; Rodriguez-Sánchez, R.; Fdez-Valdivia, J.: Ranking of the subject areas of Scopus (2011) 0.03
    0.026710846 = product of:
      0.16026507 = sum of:
        0.16026507 = weight(_text_:ranking in 4768) [ClassicSimilarity], result of:
          0.16026507 = score(doc=4768,freq=14.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.79057544 = fieldWeight in 4768, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4768)
      0.16666667 = coord(1/6)
    
    Abstract
    Here, we show a longitudinal analysis of the ranking of the subject areas of Elsevier's Scopus. To this aim, we present three summary measures based on the journal ranking scores for academic journals in each subject area. This longitudinal study allows us to analyze developmental trends over time in different subject areas with distinct citation and publication patterns. We evaluate the relative performance of each subject area by using the overall prestige for the most important journals with ranking score above a given threshold (e.g., in the first quartile) as well as the overall prestige gap for the less important journals with ranking score below a given threshold (e.g., below the top 10 journals). Thus, we propose that it should be possible to study different subject areas by means of appropriate summary measures of the journal ranking scores, which provide additional information beyond analyzing the inequality of the whole ranking-score distribution for academic journals in each subject area. This allows us to investigate whether subject areas with high levels of overall prestige for the first-quartile journals also tend to achieve low levels of overall prestige gap for the journals below the top 10.
  13. Bornmann, L.; Moya Anegón, F.de: What proportion of excellent papers makes an institution one of the best worldwide? : Specifying thresholds for the interpretation of the results of the SCImago Institutions Ranking and the Leiden Ranking (2014) 0.02
    0.024729438 = product of:
      0.14837663 = sum of:
        0.14837663 = weight(_text_:ranking in 1235) [ClassicSimilarity], result of:
          0.14837663 = score(doc=1235,freq=12.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.7319307 = fieldWeight in 1235, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1235)
      0.16666667 = coord(1/6)
    
    Abstract
    University rankings generally present users with the problem of placing the results given for an institution in context. Only a comparison with the performance of all other institutions makes it possible to say exactly where an institution stands. In order to interpret the results of the SCImago Institutions Ranking (based on Scopus data) and the Leiden Ranking (based on Web of Science data), in this study we offer thresholds with which it is possible to assess whether an institution belongs to the top 1%, top 5%, top 10%, top 25%, or top 50% of institutions in the world. The thresholds are based on the excellence rate, or PP(top 10%). Both indicators measure the proportion of an institution's publications that belong to the 10% most frequently cited publications and are the most important indicators for measuring institutional impact. For example, while an institution must achieve a value of 24.63% in the Leiden Ranking 2013 to be considered one of the top 1% of institutions worldwide, the SCImago Institutions Ranking requires 30.2%.
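    A minimal sketch of the PP(top 10%) indicator itself, assuming raw citation counts and ignoring the field and publication-year normalization that both rankings apply:

        def pp_top10(inst_citations, world_citations):
            """Share of an institution's papers among the world's 10% most cited."""
            ranked = sorted(world_citations)
            threshold = ranked[int(0.9 * len(ranked))]   # 90th-percentile cutoff
            top = sum(1 for c in inst_citations if c >= threshold)
            return 100.0 * top / len(inst_citations)

        world = list(range(20))                # toy citation counts, 0..19
        print(pp_top10([19, 5, 2, 0], world))  # 25.0 -> PP(top 10%) = 25%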
  14. Egghe, L.: Informetric explanation of some Leiden Ranking graphs (2014) 0.02
    0.022844076 = product of:
      0.13706446 = sum of:
        0.13706446 = weight(_text_:ranking in 1236) [ClassicSimilarity], result of:
          0.13706446 = score(doc=1236,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.67612857 = fieldWeight in 1236, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=1236)
      0.16666667 = coord(1/6)
    
    Abstract
    The S-shaped functional relation between the mean citation score and the proportion of top 10% publications for the 500 Leiden Ranking universities is explained using results of the shifted Lotka function. Also the concave or convex relation between the proportion of top 100θ% publications, for different fractions θ, is explained using the obtained new informetric model.
  15. Bornmann, L.; Bauer, J.: Which of the world's institutions employ the most highly cited researchers : an analysis of the data from highlycited.com (2015) 0.02
    0.022844076 = product of:
      0.13706446 = sum of:
        0.13706446 = weight(_text_:ranking in 1556) [ClassicSimilarity], result of:
          0.13706446 = score(doc=1556,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.67612857 = fieldWeight in 1556, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=1556)
      0.16666667 = coord(1/6)
    
    Abstract
    In 2014, Thomson Reuters published a list of the most highly cited researchers worldwide (highlycited.com). Because the data are freely available for downloading and include the names of the researchers' institutions, we produced a ranking of the institutions on the basis of the number of highly cited researchers per institution. This ranking is intended to be a helpful amendment of other available institutional rankings.
  16. Bornmann, L.; Bauer, J.: Which of the world's institutions employ the most highly cited researchers : an analysis of the data from highlycited.com (2015) 0.02
    0.022844076 = product of:
      0.13706446 = sum of:
        0.13706446 = weight(_text_:ranking in 2223) [ClassicSimilarity], result of:
          0.13706446 = score(doc=2223,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.67612857 = fieldWeight in 2223, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=2223)
      0.16666667 = coord(1/6)
    
    Abstract
    In 2014, Thomson Reuters published a list of the most highly cited researchers worldwide (highlycited.com). Because the data are freely available for downloading and include the names of the researchers' institutions, we produced a ranking of the institutions on the basis of the number of highly cited researchers per institution. This ranking is intended to be a helpful amendment of other available institutional rankings.
  17. Haley, M.R.; McGee, M.K.: ¬A parametric "parent metric" approach for comparing maximum-normalized journal ranking metrics (2018) 0.02
    0.022844076 = product of:
      0.13706446 = sum of:
        0.13706446 = weight(_text_:ranking in 3313) [ClassicSimilarity], result of:
          0.13706446 = score(doc=3313,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.67612857 = fieldWeight in 3313, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=3313)
      0.16666667 = coord(1/6)
    
    Abstract
    This article proposes a parametric approach for facilitating inter-metric and inter-field comparisons of citation-based journal ranking metrics. The mechanism is simple to apply and adjusts for metric magnitude differentials and distributional asymmetries in the rank-score curves. The method is demonstrated using h-index, AWCR-index, g-index, and e-index data from journals in Accounting, Economics, and Finance.
  18. Haley, M.R.: On the normalization and distributional adjustment of journal ranking metrics : a simple parametric approach (2017) 0.02
    0.022844076 = product of:
      0.13706446 = sum of:
        0.13706446 = weight(_text_:ranking in 3653) [ClassicSimilarity], result of:
          0.13706446 = score(doc=3653,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.67612857 = fieldWeight in 3653, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=3653)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper presents a simple parametric statistical approach to comparing different citation-based journal ranking metrics within a single academic field. The mechanism can also be used to compare the same metric across different academic fields. The mechanism operates by selecting an optimal normalization factor and an optimal distributional adjustment for the rank-score curve, both of which are instrumental in making sound intermetric and interfield journal comparisons.
  19. Mayr, P.: Information Retrieval-Mehrwertdienste für Digitale Bibliotheken : Crosskonkordanzen und Bradfordizing (2010) 0.02
    0.020983625 = product of:
      0.12590174 = sum of:
        0.12590174 = weight(_text_:ranking in 4910) [ClassicSimilarity], result of:
          0.12590174 = score(doc=4910,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.62106377 = fieldWeight in 4910, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4910)
      0.16666667 = coord(1/6)
    
    Abstract
    This work presents two value-added services for search systems that address typical problems in searching for scholarly literature. The two services, treatment of semantic heterogeneity using cross-concordances as an example and re-ranking based on Bradfordizing, which come into play at different stages of the search, are described and evaluated in detail in this book. The tests used topics and data from two evaluation projects (CLEF and KoMoHe). The intellectually assessed documents come from a total of seven subject databases covering the social sciences, political science, economics, psychology, and medicine. The results of this work have been incorporated into the GESIS project IRM.
    RSWK
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
    Subject
    Dokumentationssprache / Heterogenität / Information Retrieval / Ranking / Evaluation
  20. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.02
    0.020983625 = product of:
      0.12590174 = sum of:
        0.12590174 = weight(_text_:ranking in 1109) [ClassicSimilarity], result of:
          0.12590174 = score(doc=1109,freq=6.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.62106377 = fieldWeight in 1109, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=1109)
      0.16666667 = coord(1/6)
    
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we examine whether the type of subject-area profile an institution (university or research-focused institution) has, in terms of the fields it researches, influences its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exerts an important influence on the outcome of a performance measurement: certain subject-area types of institutions have an advantage in ranking position over others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also with field-normalized indicators.

Languages

  • e 138
  • d 14

Types

  • a 149
  • m 3
  • el 2
  • s 1