This database contains more than 40,000 documents on topics from the areas of descriptive cataloguing, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of 4 June 2021)
1. Colavizza, G. ; Boyack, K.W. ; Eck, N.J. van ; Waltman, L.: The closer the better : similarity of publication pairs at different cocitation levels.
In: Journal of the Association for Information Science and Technology. 69(2018) no.4, pp.600-609.
Abstract: We investigated the similarities of pairs of articles that are cocited at the different cocitation levels of the journal, article, section, paragraph, sentence, and bracket. Our results indicate that textual similarity, intellectual overlap (shared references), author overlap (shared authors), and proximity in publication time all rise monotonically as the cocitation level gets lower (from journal to bracket). While the main gain in similarity happens when moving from journal to article cocitation, all level changes entail an increase in similarity, especially section to paragraph and paragraph to sentence/bracket levels. We compared the results from four journals over the years 2010-2015: Cell, the European Journal of Operational Research, Physics Letters B, and Research Policy, with consistent general outcomes and some interesting differences. Our findings motivate the use of granular cocitation information as defined by meaningful units of text, with implications for, among others, the elaboration of maps of science and the retrieval of scholarly literature.
Content: cf. https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.23981.
2. Olensky, M. ; Schmidt, M. ; Eck, N.J. van: Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to the Web of Science.
In: Journal of the Association for Information Science and Technology. 67(2016) no.10, pp.2550-2564.
Abstract: The results of bibliometric studies provided by bibliometric research groups, for example, the Centre for Science and Technology Studies (CWTS) and the Institute for Research Information and Quality Assurance (iFQ), are often used in the process of research assessment. Their databases use Web of Science (WoS) citation data, which they match according to their own matching algorithms: in the case of CWTS for standard usage in their studies, and in the case of iFQ on an experimental basis. Because the problem of nonmatched citations in the WoS persists due to inaccuracies in the references or inaccuracies introduced in the data extraction process, it is important to ascertain how well these inaccuracies are rectified by these citation matching algorithms. This article evaluates the algorithms of CWTS and iFQ in comparison to the WoS in a quantitative and a qualitative analysis. The analysis builds upon the method and the manually verified corpus of a previous study. The algorithm of CWTS performs best, closely followed by that of iFQ. The WoS algorithm still performs quite well (F1 score: 96.41%), but shows deficits in matching references containing inaccuracies. An additional problem is posed by incorrectly provided cited reference information in source articles in the WoS.
Content: cf. http://onlinelibrary.wiley.com/doi/10.1002/asi.23590/full.
Object: Web of Science
3. Waltman, L. ; Eck, N.J. van ; Raan, A.F.J. van: Universality of citation distributions revisited.
In: Journal of the American Society for Information Science and Technology. 63(2012) no.1, pp.72-77.
Abstract: Radicchi, Fortunato, and Castellano (2008) claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality-of-citation-distributions claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields indeed seem to have fairly similar citation distributions, there are exceptions as well. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.
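The rescaling tested in this study can be illustrated with a minimal sketch (the field names and citation counts below are invented for illustration): each paper's citation count c is divided by the average count c0 of its field, and the universality claim is that the distribution of the resulting relative indicator c_f = c / c0 looks the same across fields.

```python
def rescaled_citations(citations_by_field):
    """Compute the relative indicator c_f = c / c0, where c0 is the
    average citation count of the paper's field (Radicchi et al.'s rescaling)."""
    rescaled = {}
    for field, counts in citations_by_field.items():
        c0 = sum(counts) / len(counts)  # field average
        rescaled[field] = [c / c0 for c in counts]
    return rescaled

# Hypothetical counts: a high-citation field and a low-citation field.
data = {"cell biology": [10, 40, 70], "mathematics": [1, 4, 7]}
print(rescaled_citations(data))
# After rescaling, both fields show the same relative distribution;
# the universality claim is that this holds (approximately) in general.
```

The validation study summarized above found that this approximate collapse holds for many fields but not for all of them.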
4. Waltman, L. ; Eck, N.J. van: The inconsistency of the h-index.
In: Journal of the American Society for Information Science and Technology. 63(2012) no.2, pp.406-415.
Abstract: The h-index is a popular bibliometric indicator for assessing individual scientists. We criticize the h-index from a theoretical point of view. We argue that for the purpose of measuring the overall scientific impact of a scientist (or some other unit of analysis), the h-index behaves in a counterintuitive way. In certain cases, the mechanism used by the h-index to aggregate publication and citation statistics into a single number leads to inconsistencies in the way in which scientists are ranked. Our conclusion is that the h-index cannot be considered an appropriate indicator of a scientist's overall scientific impact. Based on recent theoretical insights, we discuss what kind of indicators can be used as an alternative to the h-index. We pay special attention to the highly cited publications indicator. This indicator has a lot in common with the h-index, but unlike the h-index it does not produce inconsistent rankings.
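The kind of ranking inconsistency criticized here can be reproduced in a short sketch (all citation counts below are invented for illustration): two scientists make the identical improvement, yet their h-index ranking reverses.

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return max((i for i, c in enumerate(ranked, start=1) if c >= i), default=0)

# Hypothetical scientists A and B (invented citation counts).
a, b = [3, 3, 3], [4, 4, 2]
print(h_index(a), h_index(b))  # 3 2 -> A ranks above B

# Both add two new publications with 5 citations each: the same achievement.
a2, b2 = a + [5, 5], b + [5, 5]
print(h_index(a2), h_index(b2))  # 3 4 -> now B ranks above A
```

An indicator that simply counts highly cited publications (papers above a fixed citation threshold) would rank both improvements identically, which is why the abstract singles it out as a consistent alternative.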
5. Waltman, L. ; Eck, N.J. van: A new methodology for constructing a publication-level classification system of science.
In: Journal of the American Society for Information Science and Technology. 63(2012) no.12, pp.2378-2392.
Abstract: Classifying journals or publications into research areas is an essential element of many bibliometric analyses. Classification usually takes place at the level of journals, where the Web of Science subject categories are the most popular classification system. However, journal-level classification systems have two important limitations: They offer only a limited amount of detail, and they have difficulties with multidisciplinary journals. To avoid these limitations, we introduce a new methodology for constructing classification systems at the level of individual publications. In the proposed methodology, publications are clustered into research areas based on citation relations. The methodology is able to deal with very large numbers of publications. We present an application in which a classification system is produced that includes almost 10 million publications. Based on an extensive analysis of this classification system, we discuss the strengths and the limitations of the proposed methodology. Important strengths are the transparency and relative simplicity of the methodology and its fairly modest computing and memory requirements. The main limitation of the methodology is its exclusive reliance on direct citation relations between publications. The accuracy of the methodology can probably be increased by also taking into account other types of relations, for instance relations based on bibliographic coupling.
6. Waltman, L. ; Calero-Medina, C. ; Kosten, J. ; Noyons, E.C.M. ; Tijssen, R.J.W. ; Eck, N.J. van ; Leeuwen, T.N. van ; Raan, A.F.J. van ; Visser, M.S. ; Wouters, P.: The Leiden ranking 2011/2012 : data collection, indicators, and interpretation.
In: Journal of the American Society for Information Science and Technology. 63(2012) no.12, pp.2405-2418.
Abstract: The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out.
7. Waltman, L. ; Eck, N.J. van: The relation between eigenfactor, audience factor, and influence weight.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.7, pp.1476-1486.
Abstract: We present a theoretical and empirical analysis of a number of bibliometric indicators of journal performance. We focus on three indicators in particular: the Eigenfactor indicator, the audience factor, and the influence weight indicator. Our main finding is that the last two indicators can be regarded as a kind of special case of the first indicator. We also find that the three indicators can be nicely characterized in terms of two properties. We refer to these properties as the property of insensitivity to field differences and the property of insensitivity to insignificant journals. The empirical results that we present illustrate our theoretical findings. We also show empirically that the differences between various indicators of journal performance are quite substantial.
8. Eck, N.J. van ; Waltman, L. ; Dekker, R. ; Berg, J. van den: A comparison of two techniques for bibliometric mapping : multidimensional scaling and VOS.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.12, pp.2405-2416.
Abstract: VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.
9. Eck, N.J. van ; Waltman, L.: How to normalize cooccurrence data? : an analysis of some well-known similarity measures.
In: Journal of the American Society for Information Science and Technology. 60(2009) no.8, pp.1635-1651.
Abstract: In scientometric research, the use of cooccurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this article, we theoretically analyze the properties of similarity measures for cooccurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely, set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that cooccurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.
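The four measures analyzed in this article have simple closed forms, which a minimal sketch can make concrete (c_ij is the number of co-occurrences of items i and j; s_i and s_j are their total occurrence counts; the example numbers are invented):

```python
import math

def association_strength(c_ij, s_i, s_j):
    # Probabilistic measure: proportional to the observed number of
    # co-occurrences divided by the number expected under independence.
    return c_ij / (s_i * s_j)

def cosine(c_ij, s_i, s_j):
    # Set-theoretic measure: co-occurrences over the geometric mean.
    return c_ij / math.sqrt(s_i * s_j)

def inclusion_index(c_ij, s_i, s_j):
    # Set-theoretic measure: co-occurrences over the smaller total.
    return c_ij / min(s_i, s_j)

def jaccard_index(c_ij, s_i, s_j):
    # Set-theoretic measure: co-occurrences over the size of the union.
    return c_ij / (s_i + s_j - c_ij)

# Example: two keywords co-occur 5 times; they occur 10 and 20 times overall.
print(association_strength(5, 10, 20))  # 0.025
print(cosine(5, 10, 20))                # ~0.354
print(inclusion_index(5, 10, 20))       # 0.5
print(jaccard_index(5, 10, 20))         # 0.2
```

The division of these measures into a probabilistic one (association strength) and set-theoretic ones (the other three) is exactly the distinction the abstract draws.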
10. Eck, N.J. van ; Waltman, L.: Appropriate similarity measures for author co-citation analysis.
In: Journal of the American Society for Information Science and Technology. 59(2008) no.10, pp.1653-1661.
Abstract: We provide in this article a number of new insights into the methodological discussion about author co-citation analysis. We first argue that the use of the Pearson correlation for measuring the similarity between authors' co-citation profiles is not very satisfactory. We then discuss what kind of similarity measures may be used as an alternative to the Pearson correlation. We consider three similarity measures in particular. One is the well-known cosine. The other two similarity measures have not been used before in the bibliometric literature. We show by means of an example that the choice of an appropriate similarity measure has a high practical relevance. Finally, we discuss the use of similarity measures for statistical inference.
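One well-known objection to the Pearson correlation in this setting can be shown numerically (the co-citation profiles below are invented for illustration): padding two authors' profiles with additional authors who are co-cited with neither of them leaves the cosine unchanged but shifts the Pearson correlation, even though no new co-citation information was added.

```python
import math

def cosine_sim(x, y):
    """Cosine of the angle between two co-citation profile vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

def pearson(x, y):
    """Pearson correlation of two co-citation profile vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented co-citation profiles of two authors over four other authors.
x, y = [4, 3, 0, 1], [3, 4, 1, 0]
# Append four authors with zero co-citations to both profiles.
x0, y0 = x + [0] * 4, y + [0] * 4

print(cosine_sim(x, y), cosine_sim(x0, y0))  # identical (~0.923)
print(pearson(x, y), pearson(x0, y0))        # 0.8 vs ~0.889
```

The cosine's invariance under such zero-padding is one reason it is among the alternatives considered here.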
11. Waltman, L. ; Eck, N.J. van: Some comments on the question whether co-occurrence data should be normalized.
In: Journal of the American Society for Information Science and Technology. 58(2007) no.11, pp.1701-1703.
Abstract: In a recent article in JASIST, L. Leydesdorff and L. Vaughan (2006) asserted that raw cocitation data should be analyzed directly, without first applying a normalization such as the Pearson correlation. In this communication, it is argued that there is nothing wrong with the widely adopted practice of normalizing cocitation data. One of the arguments put forward by Leydesdorff and Vaughan turns out to depend crucially on incorrect multidimensional scaling maps that are due to an error in the PROXSCAL program in SPSS.