Search (121 results, page 1 of 7)

  • theme_ss:"Informetrie"
  • year_i:[2010 TO 2020}
  1. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.22
    0.22029212 = product of:
      0.5874457 = sum of:
        0.19581522 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.19581522 = score(doc=2188,freq=2.0), product of:
            0.26131085 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030822188 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.19581522 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.19581522 = score(doc=2188,freq=2.0), product of:
            0.26131085 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030822188 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.19581522 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.19581522 = score(doc=2188,freq=2.0), product of:
            0.26131085 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.030822188 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.375 = coord(3/8)
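The tree above is Lucene's ClassicSimilarity explain output. As a hedged sketch (the idf formula used below is the standard Lucene ClassicSimilarity one and is assumed, not printed on this page), the listed numbers fit together as:

```python
import math

# Inputs as printed in the explain tree above
max_docs, doc_freq, freq = 44218, 24, 2.0
query_norm, field_norm = 0.030822188, 0.0625

idf = 1 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=24, maxDocs=44218) ≈ 8.478011
tf = math.sqrt(freq)                           # tf(freq=2.0) ≈ 1.4142135
query_weight = idf * query_norm                # ≈ 0.26131085
field_weight = tf * idf * field_norm           # ≈ 0.7493574
score = query_weight * field_weight            # per-field weight ≈ 0.19581522
```

The entry score then follows from the tree: the three per-field weights are summed (0.5874457) and scaled by coord(3/8), giving 0.22029212.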
    
    Content
     See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/
  2. Ohly, P.: Dimensions of globality : a bibliometric analysis (2016) 0.01
    Date
    20. 1.2019 11:22:31
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
  3. Shi, D.; Rousseau, R.; Yang, L.; Li, J.: A journal's impact factor is influenced by changes in publication delays of citing journals (2017) 0.01
    Abstract
    In this article we describe another problem with journal impact factors by showing that one journal's impact factor is dependent on other journals' publication delays. The proposed theoretical model predicts a monotonically decreasing function of the impact factor as a function of publication delay, on condition that the citation curve of the journal is monotone increasing during the publication window used in the calculation of the journal impact factor; otherwise, this function has a reversed U shape. Our findings based on simulations are verified by examining three journals in the information sciences: the Journal of Informetrics, Scientometrics, and the Journal of the Association for Information Science and Technology.
    Date
    16.11.2017 13:29:52
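The abstract's prediction can be illustrated with a toy simulation (my own construction under simplifying assumptions, not the authors' actual model): if citing journals publish with a delay of d months, citations that become visible at cited-article age a were produced d months earlier, so a monotone increasing citation curve yields an impact factor that falls monotonically with delay.

```python
# Two-year impact-factor window: citations appearing at ages 12..35 months.
def impact_factor(curve, delay_months):
    # citations visible at age a were generated `delay_months` earlier
    return sum(curve(a - delay_months) for a in range(12, 36))

def increasing_curve(age_months):
    # monotone increasing citation curve, zero before publication
    return max(age_months, 0)

ifs = [impact_factor(increasing_curve, d) for d in range(0, 13)]
# for an increasing curve, the impact factor falls monotonically with delay
```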
  4. Rötzer, F.: Bindestriche in Titeln von Artikeln schaden der wissenschaftlichen Reputation (2019) 0.01
    Content
     "But why are titles with hyphens cited less often? The researchers suspect that authors, when citing an article, may simply overlook the hyphens. The databases then cannot link the citation to the article whose title contains the hyphens, so the citation count comes out wrong. The problem seems to grow with multiple hyphens, which raise the human error rate. The researchers dispute that title length as such has anything to do with citation frequency: longer titles are simply more likely to contain more hyphens, and are therefore cited less often because of hyphenation errors. Articles with hyphens are also said to lower the JIF of scholarly journals."
    Date
    29. 6.2019 17:46:17
  5. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.01
    Abstract
    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We will address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, comprising 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is the ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.
    Date
    22. 1.2011 13:06:52
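The "overwhelming difficulty" the abstract mentions is easy to reproduce. A hedged illustration of the traditional simplified key (surname plus first initial), the baseline the paper improves on, not the authors' heuristic itself; names and paper IDs are invented:

```python
from collections import defaultdict

def author_key(name):
    # Traditional simplified key: surname + first initial of the given name
    surname, _, given = name.partition(",")
    return f"{surname.strip().lower()}|{given.strip()[:1].upper()}"

publications = [
    ("D'Angelo, Ciriaco", "paper-1"),
    ("D'Angelo, C.", "paper-2"),
    ("D'Angelo, Carla", "paper-3"),  # a different person, same key: a collision
]

clusters = defaultdict(list)
for name, pub in publications:
    clusters[author_key(name)].append(pub)
```

All three publications collapse into one cluster, wrongly attributing Carla's paper to Ciriaco; this is the misattribution problem that large-scale assessments must work around.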
  6. Hicks, D.; Wang, J.: Coverage and overlap of the new social sciences and humanities journal lists (2011) 0.01
    Abstract
    This is a study of coverage and overlap in second-generation social sciences and humanities journal lists, with attention paid to curation and the judgment of scholarliness. We identify four factors underpinning coverage shortfalls: journal language, country, publisher size, and age. Analyzing these factors turns our attention to the process of assessing a journal as scholarly, which is a necessary foundation for every list of scholarly journals. Although scholarliness should be a quality inherent in the journal, coverage falls short because groups assessing scholarliness have different perspectives on the social sciences and humanities literature. That the four factors shape perspectives on the literature points to a deeper problem of fragmentation within the scholarly community. We propose reducing this fragmentation as the best method to reduce coverage shortfalls.
    Date
    22. 1.2011 13:21:28
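The coverage and overlap measures the abstract studies reduce to set arithmetic; a minimal sketch with invented journal names:

```python
# Two hypothetical journal lists (names invented for illustration)
list_a = {"Journal of Informetrics", "Scientometrics", "JASIST", "Revista X"}
list_b = {"Scientometrics", "JASIST", "Zeitschrift Y"}

overlap = list_a & list_b                        # journals on both lists
coverage_of_b_by_a = len(overlap) / len(list_b)  # share of list B covered by A
```

A non-English journal like "Zeitschrift Y" falling outside list A is exactly the kind of language-driven coverage shortfall the paper identifies.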
  7. Huang, M.-H.; Huang, W.-T.; Chang, C.-C.; Chen, D. Z.; Lin, C.-P.: The greater scattering phenomenon beyond Bradford's law in patent citation (2014) 0.00
    Date
    22. 8.2014 17:11:29
  8. Costas, R.; Perianes-Rodríguez, A.; Ruiz-Castillo, J.: On the quest for currencies of science : field "exchange rates" for citations and Mendeley readership (2017) 0.00
    Abstract
     Purpose: The introduction of "altmetrics" as new tools to analyze scientific impact within the reward system of science has challenged the hegemony of citations as the predominant source for measuring scientific impact. Mendeley readership has been identified as one of the most important altmetric sources, with several features that are similar to citations. The purpose of this paper is to perform an in-depth analysis of the differences and similarities between the distributions of Mendeley readership and citations across fields.
     Design/methodology/approach: The authors analyze two issues by using in each case a common analytical framework for both metrics: the shape of the distributions of readership and citations, and the field normalization problem generated by differences in citation and readership practices across fields. For the first issue the authors use the characteristic scores and scales method, and for the second the measurement framework introduced in Crespo et al. (2013).
     Findings: There are three main results. First, the citations and Mendeley readership distributions exhibit a strikingly similar degree of skewness in all fields. Second, the results on "exchange rates (ERs)" for Mendeley readership empirically support the possibility of comparing readership counts across fields, as well as the field normalization of readership distributions using ERs as normalization factors. Third, field normalization using field mean readerships as normalization factors leads to comparably good results.
     Originality/value: These findings open up challenging new questions, particularly regarding the possibility of obtaining conflicting results from field normalized citation and Mendeley readership indicators; this suggests the need to better determine the role of the two metrics in capturing scientific recognition.
    Date
    20. 1.2015 18:30:22
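The paper's third finding (field mean readerships as normalization factors) can be sketched in a few lines; the fields and counts below are invented for illustration:

```python
# Hedged sketch: field mean readerships acting as "exchange rates"
field_readerships = {
    "physics": [4, 6, 10, 20],
    "biology": [10, 30, 50, 110],
}
field_means = {f: sum(v) / len(v) for f, v in field_readerships.items()}

def normalized(field, count):
    # a count divided by its field mean is comparable across fields
    return count / field_means[field]
```

With these toy numbers, 20 readers in physics and 100 readers in biology both equal 2.0 field-normalized readerships, which is what makes cross-field comparison possible.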
  9. Herb, U.: Überwachungskapitalismus und Wissenschaftssteuerung (2019) 0.00
    Date
    29. 6.2019 17:46:17
    4. 8.2019 19:52:29
    Issue
     [29 July 2019].
  10. Prathap, G.: Quantity, quality, and consistency as bibliometric indicators (2014) 0.00
    Date
    29. 1.2014 15:59:59
  11. Bornmann, L.: On the function of university rankings (2014) 0.00
    Date
    29. 1.2014 16:55:03
  12. Zornic, N.; Markovic, A.; Jeremic, V.: How the top 500 ARWU can provide a misleading rank (2014) 0.00
    Date
    16. 6.2014 19:29:15
  13. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.00
    Date
    18. 3.2014 19:13:22
  14. Strotmann, A.; Zhao, D.: Author name disambiguation : what difference does it make in author-based citation analysis? (2012) 0.00
    Abstract
    In this article, we explore how strongly author name disambiguation (AND) affects the results of an author-based citation analysis study, and identify conditions under which the traditional simplified approach of using surnames and first initials may suffice in practice. We compare author citation ranking and cocitation mapping results in the stem cell research field from 2004 to 2009 using two AND approaches: the traditional simplified approach of using author surname and first initial and a sophisticated algorithmic approach. We find that the traditional approach leads to extremely distorted rankings and substantially distorted mappings of authors in this field when based on first- or all-author citation counting, whereas last-author-based citation ranking and cocitation mapping both appear relatively immune to the author name ambiguity problem. This is largely because Romanized names of Chinese and Korean authors, who are very active in this field, are extremely ambiguous, but few of these researchers consistently publish as last authors in bylines. We conclude that a more earnest effort is required to deal with the author name ambiguity problem in both citation analysis and information retrieval, especially given the current trend toward globalization. In the stem cell research field, in which laboratory heads are traditionally listed as last authors in bylines, last-author-based citation ranking and cocitation mapping using the traditional approach to author name disambiguation may serve as a simple workaround, but likely at the price of largely filtering out Chinese and Korean contributions to the field as well as important contributions by young researchers.
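The contrast the abstract draws between all-author and last-author citation counting can be sketched directly; the bylines and citation counts below are invented, and the simplified name key stands in for the traditional approach:

```python
# Toy bylines: two *different* researchers both publish as "Wang, W."
papers = [
    {"authors": ["Wang, W.", "Smith, J."], "citations": 10},
    {"authors": ["Wang, W.", "Jones, K."], "citations": 4},
]

def citation_counts(papers, last_author_only=False):
    tally = {}
    for p in papers:
        names = p["authors"][-1:] if last_author_only else p["authors"]
        for n in names:
            tally[n] = tally.get(n, 0) + p["citations"]
    return tally
```

Under all-author counting the two ambiguous "Wang, W." entries merge into one inflated count of 14; under last-author counting the ambiguous non-last authors simply drop out, which is the relative immunity the paper reports, at the price of filtering those contributions away.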
  15. Ibáñez, A.; Armañanzas, R.; Bielza, C.; Larrañaga, P.: Genetic algorithms and Gaussian Bayesian networks to uncover the predictive core set of bibliometric indices (2016) 0.00
    Abstract
     The diversity of bibliometric indices today poses the challenge of exploiting the relationships among them. Our research uncovers the best core set of relevant indices for predicting other bibliometric indices. An added difficulty is to select the role of each variable, that is, which bibliometric indices are predictive variables and which are response variables. This results in a novel multioutput regression problem where the role of each variable (predictor or response) is unknown beforehand. We use Gaussian Bayesian networks to solve this problem and discover multivariate relationships among bibliometric indices. These networks are learnt by a genetic algorithm that looks for the optimal models that best predict bibliometric data. Results show that the optimal induced Gaussian Bayesian networks corroborate previous relationships between several indices, but also suggest new, previously unreported interactions. An extended analysis of the best model illustrates that a set of 12 bibliometric indices can be accurately predicted using only a smaller predictive core subset composed of citations, g-index, q2-index, and hr-index. This research is performed using bibliometric data on Spanish full professors associated with the computer science area.
  16. Olensky, M.; Schmidt, M.; Eck, N.J. van: Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to the Web of science (2016) 0.00
    Abstract
    The results of bibliometric studies provided by bibliometric research groups, for example, the Centre for Science and Technology Studies (CWTS) and the Institute for Research Information and Quality Assurance (iFQ), are often used in the process of research assessment. Their databases use Web of Science (WoS) citation data, which they match according to their own matching algorithms-in the case of CWTS for standard usage in their studies and in the case of iFQ on an experimental basis. Because the problem of nonmatched citations in the WoS persists due to inaccuracies in the references or inaccuracies introduced in the data extraction process, it is important to ascertain how well these inaccuracies are rectified in these citation matching algorithms. This article evaluates the algorithms of CWTS and iFQ in comparison to the WoS in a quantitative and a qualitative analysis. The analysis builds upon the method and the manually verified corpus of a previous study. The algorithm of CWTS performs best, closely followed by that of iFQ. The WoS algorithm still performs quite well (F1 score: 96.41%), but shows deficits in matching references containing inaccuracies. An additional problem is posed by incorrectly provided cited reference information in source articles by the WoS.
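A hedged sketch of what "rectifying inaccuracies" in citation matching means in practice: a reference with a misspelled journal name fails exact matching but survives tolerant string matching. `SequenceMatcher` here stands in for whatever similarity the CWTS/iFQ algorithms actually use, and the reference strings are invented; the F1 helper shows the evaluation measure the article reports (96.41% for WoS):

```python
from difflib import SequenceMatcher

def references_match(a, b, threshold=0.9):
    # tolerant matching: near-identical strings match despite small inaccuracies
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

stored = "Olensky M, Scientometrics, 2016, 108(3)"
cited  = "Olensky M, Scientometics, 2016, 108(3)"   # misspelled journal name

def f1_score(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)
```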
  17. Eck, N.J. van; Waltman, L.; Dekker, R.; Berg, J. van den: A comparison of two techniques for bibliometric mapping : multidimensional scaling and VOS (2010) 0.00
    Abstract
    VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.
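Both MDS and VOS start from a similarity matrix over the mapped items. As a hedged sketch of that input stage (the association-strength measure is a common normalization in this literature; the keyword data is invented):

```python
from collections import Counter
from itertools import combinations

# Toy corpus: keyword sets of three documents
docs = [
    {"citation", "impact"},
    {"citation", "mapping"},
    {"citation", "impact"},
]

occurrences, cooccurrences = Counter(), Counter()
for keywords in docs:
    occurrences.update(keywords)
    for pair in combinations(sorted(keywords), 2):
        cooccurrences[pair] += 1

def association_strength(a, b):
    # similarity s_ij = c_ij / (c_i * c_j), the matrix MDS/VOS would embed
    pair = tuple(sorted((a, b)))
    return cooccurrences[pair] / (occurrences[a] * occurrences[b])
```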
  18. Marx, W.: Special features of historical papers from the viewpoint of bibliometrics (2011) 0.00
    Abstract
    This paper deals with the specific features of historical papers that are relevant to information retrieval and bibliometrics. The analysis is based mainly on the citation indexes accessible via the Web of Science (WoS), but also on field-specific databases: the Chemical Abstracts Service (CAS) literature database and the INSPEC database. First, the journal coverage of the WoS (in particular of the WoS Century of Science archive), the limitations of specific search fields, and several database errors are discussed. Then, the problem of misspelled citations and their "mutations" is demonstrated with a few typical examples. Complex author names, complicated journal names, and other sources of error that result from prior citation practice are further issues. Finally, some basic phenomena limiting the meaning of citation counts of historical papers are presented and explained.
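    Misspelled-citation "mutations" of the kind described here are typically caught with fuzzy string matching. A minimal sketch using Python's difflib; the threshold and the reference strings are illustrative, not the paper's method:

    ```python
    from difflib import SequenceMatcher

    def similar(a: str, b: str, threshold: float = 0.9) -> bool:
        """Flag two reference strings as likely variants of the same citation.
        Real matching algorithms also compare fields such as volume, year, page."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    original = "Einstein, A., Ann. Phys. 17, 891 (1905)"
    mutation = "Einstien, A., Ann. Phys. 17, 891 (1905)"  # transposed letters
    print(similar(original, mutation))  # → True
    ```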
  19. López Piñeiro, C.; Gimenez Toledo, E.: Knowledge classification : a problem for scientific assessment in Spain? (2011) 0.00
  20. Bouyssou, D.; Marchant, T.: Ranking scientists and departments in a consistent manner (2011) 0.00
    Abstract
    The standard data that we use when computing bibliometric rankings of scientists are their publication/citation records, i.e., so many papers with 0 citations, so many with 1 citation, so many with 2 citations, etc. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors have given in to the temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains the "worst" scientists, and only them. This problem will not occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this article, we explore the consequences of consistency and we characterize two families of consistent rankings.
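    The kind of inversion the authors describe can be reproduced with a toy example: rank both scientists and departments by mean citations per paper. The counts below are invented, a sketch of the phenomenon rather than the paper's formal framework:

    ```python
    def mean_citations(papers):
        """Average citations per paper, used here to rank both scientists and departments."""
        return sum(papers) / len(papers)

    x = [100]       # scientist x: one highly cited paper
    y = [60, 60]    # scientist y: lower average than x
    z = [0] * 20    # scientist z: many uncited papers

    print(mean_citations(x) > mean_citations(y))          # → True: x outranks y
    print(mean_citations(x + z) > mean_citations(y + z))  # → False: y's department outranks x's
    ```

    Adding the same colleague z to both departments reverses the order between them, which is exactly what the consistency property rules out.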

Languages

  • e 110
  • d 11

Types

  • a 119
  • el 3
  • m 1
  • s 1