This database contains over 40,000 documents on topics from the fields of descriptive cataloguing, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of June 15, 2019)
1. Schmidt, M.: ¬An analysis of the validity of retraction annotation in PubMed and the Web of Science.
In: Journal of the Association for Information Science and Technology. 69(2018) no.2, S.318-328.
Abstract: Research on scientific misconduct relies increasingly on retractions of articles. An interdisciplinary line of research has been established that empirically assesses the phenomenon of scientific misconduct using information on retractions, and thus aims to shed light on aspects of misconduct that previously were hidden. However, comparability and interpretability of studies are to a certain extent impeded by an absence of standards in corpus delineation and by the fact that the validity of this empirical data basis has never been systematically scrutinized. This article assesses the conceptual and empirical delineation of retractions against related publication types through a comparative analysis of the coverage and consistency of retraction annotation in the databases PubMed and the Web of Science (WoS), which are both commonly used for empirical studies on retractions. The searching and linking approaches of the WoS were subsequently evaluated. The results indicate that a considerable number of PubMed retracted publications and retractions are not labeled as such in the WoS or are indistinguishable from corrections, which is highly relevant for corpus and sample strategies in the WoS.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23913/full.
Objekt: PubMed ; Web of Science
2. Ahlgren, P. ; Colliander, C. ; Sjögårde, P.: Exploring the relation between referencing practices and citation impact : a large-scale study based on Web of Science data.
In: Journal of the Association for Information Science and Technology. 69(2018) no.5, S.728-743.
Abstract: In this large-scale contribution, we deal with the relationship between properties of cited references of Web of Science articles and the field normalized citation rate of these articles. Using nearly 1 million articles, and three classification systems with different levels of granularity, we study the effects of number of cited references, share of references covered by Web of Science, mean age of references and mean citation rate of references on field normalized citation rate. To expose the relationship between the predictor variables and the response variable, we use quantile regression. We found that a higher number of references, a higher share of references to publications within Web of Science and references to more recent publications correlate with citation impact. A correlation was observed even when normalization was done with a finely grained classification system. The predictor variables affected citation impact to a larger extent at higher quantile levels. Regarding the relative importance of the predictor variables, citation impact of the cited references was in general the least important variable. Number of cited references carried most of the importance for both low and medium quantile levels, but this importance was lessened at the highest considered level.
Inhalt: Vgl.: https://onlinelibrary.wiley.com/doi/abs/10.1002/asi.23986.
Objekt: Web of Science
3. Subelj, L. ; Fiala, D.: Publication boost in Web of Science journals and its effect on citation distributions.
In: Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1018-1023.
Abstract: In this article, we show that the dramatic increase in the number of research articles indexed in the Web of Science database impacts the commonly observed distributions of citations within these articles. First, we document that the growing number of physics articles in recent years is attributed to existing journals publishing more and more articles rather than more new journals coming into being as it happens in computer science. Second, even though the references from the more recent articles generally cover a longer time span, the newer articles are cited more frequently than the older ones if the uneven article growth is not corrected for. Nevertheless, despite this change in the distribution of citations, the citation behavior of scientists does not seem to have changed.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23718/full.
Objekt: Web of Science
4. Zhang, J. ; Yu, Q. ; Zheng, F. ; Long, C. ; Lu, Z. ; Duan, Z.: Comparing keywords plus of WOS and author keywords : a case study of patient adherence research.
In: Journal of the Association for Information Science and Technology. 67(2016) no.4, S.967-972.
Abstract: Bibliometric analysis based on literature in the Web of Science (WOS) has become an increasingly popular method for visualizing the structure of scientific fields. Keywords Plus and Author Keywords are commonly selected as units of analysis, despite the limited research evidence demonstrating the effectiveness of Keywords Plus. This study was conceived to evaluate the efficacy of Keywords Plus as a parameter for capturing the content and scientific concepts presented in articles. Using scientific papers about patient adherence that were retrieved from WOS, a comparative assessment of Keywords Plus and Author Keywords was performed at the scientific field level and the document level, respectively. Our search yielded more Keywords Plus terms than Author Keywords, and the Keywords Plus terms were more broadly descriptive. Keywords Plus is as effective as Author Keywords in terms of bibliometric analysis investigating the knowledge structure of scientific fields, but it is less comprehensive in representing an article's content.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23437/abstract.
Objekt: Web of Science
5. Leydesdorff, L. ; Moya-Anegón, F. de ; Nooy, W. de: Aggregated journal-journal citation relations in Scopus and Web of Science matched and compared in terms of networks, maps, and interactive overlays.
In: Journal of the Association for Information Science and Technology. 67(2016) no.9, S.2194-2211.
Abstract: We compare the network of aggregated journal-journal citation relations provided by the Journal Citation Reports (JCR) 2012 of the Science Citation Index (SCI) and Social Sciences Citation Index (SSCI) with similar data based on Scopus 2012. First, global and overlay maps were developed for the 2 sets separately. Using fuzzy-string matching and ISSN numbers, we were able to match 10,524 journal names between the 2 sets: 96.4% of the 10,936 journals contained in JCR, or 51.2% of the 20,554 journals covered by Scopus. Network analysis was pursued on the set of journals shared between the 2 databases and the 2 sets of unique journals. Citations among the shared journals are more comprehensively covered in JCR than in Scopus, so the network in JCR is denser and more connected than in Scopus. The ranking of shared journals in terms of indegree (i.e., numbers of citing journals) or total citations is similar in both databases overall (Spearman rank correlation > 0.97), but some individual journals rank very differently. Journals that are unique to Scopus seem to be less important (they are citing shared journals rather than being cited by them), but the humanities are covered better in Scopus than in JCR.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23372/full.
Objekt: Scopus ; Web of Science
6. Olensky, M. ; Schmidt, M. ; Eck, N.J. van: Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to the Web of Science.
In: Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2550-2564.
Abstract: The results of bibliometric studies provided by bibliometric research groups, for example, the Centre for Science and Technology Studies (CWTS) and the Institute for Research Information and Quality Assurance (iFQ), are often used in the process of research assessment. Their databases use Web of Science (WoS) citation data, which they match according to their own matching algorithms-in the case of CWTS for standard usage in their studies and in the case of iFQ on an experimental basis. Because the problem of nonmatched citations in the WoS persists due to inaccuracies in the references or inaccuracies introduced in the data extraction process, it is important to ascertain how well these inaccuracies are rectified in these citation matching algorithms. This article evaluates the algorithms of CWTS and iFQ in comparison to the WoS in a quantitative and a qualitative analysis. The analysis builds upon the method and the manually verified corpus of a previous study. The algorithm of CWTS performs best, closely followed by that of iFQ. The WoS algorithm still performs quite well (F1 score: 96.41%), but shows deficits in matching references containing inaccuracies. An additional problem is posed by incorrectly provided cited reference information in source articles by the WoS.
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23590/full.
Objekt: Web of Science
7. Fang, H.: Classifying research articles in multidisciplinary sciences journals into subject categories.
In: Knowledge organization. 42(2015) no.3, S.139-153.
Abstract: In the Thomson Reuters Web of Science database, the subject categories of a journal are applied to all articles in the journal. However, many articles in multidisciplinary sciences journals may only be represented by a small number of subject categories. To provide more accurate information on the research areas of articles in such journals, we can classify articles in these journals into subject categories as defined by Web of Science based on their references. For an article in a multidisciplinary sciences journal, the method counts the subject categories in all of the article's references indexed by Web of Science, and uses the most numerous subject categories of the references to determine the most appropriate classification of the article. We used articles in an issue of Proceedings of the National Academy of Sciences (PNAS) to validate the correctness of the method by comparing the obtained results with the categories of the articles as defined by PNAS and their content. This study shows that the method provides more precise search results for the subject category of interest in bibliometric investigations through recognition of articles in multidisciplinary sciences journals whose work relates to a particular subject category.
Inhalt: Vgl.: http://www.ergon-verlag.de/isko_ko/downloads/ko_42_2015_3.pdf.
Themenfeld: Automatisches Klassifizieren
Objekt: Web of Science
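Entry 7 above classifies an article by majority vote over the Web of Science subject categories of its cited references. A minimal sketch of that idea in Python (function name and sample data are hypothetical illustrations, not the author's code):

```python
from collections import Counter

def classify_by_references(reference_categories):
    """Pick the subject category that occurs most often among the
    Web of Science categories of an article's cited references."""
    counts = Counter(cat for ref in reference_categories for cat in ref)
    if not counts:
        return None  # no indexed references to vote with
    return counts.most_common(1)[0][0]

# Hypothetical article: each inner list holds the categories of one reference.
refs = [["Biochemistry"], ["Biochemistry", "Cell Biology"],
        ["Cell Biology"], ["Biochemistry"]]
print(classify_by_references(refs))  # -> Biochemistry
```

Ties and references not indexed in Web of Science are the interesting edge cases; the paper validates the majority result against the articles' categories as defined by PNAS.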
8. Rotolo, D. ; Leydesdorff, L.: Matching Medline/PubMed data with Web of Science : a routine in R language.
In: Journal of the Association for Information Science and Technology. 66(2015) no.10, S.2155-2159.
Abstract: We present a novel routine, namely medlineR, based on the R language, that allows the user to match data from Medline/PubMed with records indexed in the ISI Web of Science (WoS) database. The matching allows exploiting the rich and controlled vocabulary of medical subject headings (MeSH) of Medline/PubMed with additional fields of WoS. The integration provides data (e.g., citation data, lists of cited references, lists of the addresses of authors' host organizations, WoS subject categories) to perform a variety of scientometric analyses. This brief communication describes medlineR, the method on which it relies, and the steps the user should follow to perform the matching across the two databases. To demonstrate the differences from Leydesdorff and Opthof (Journal of the American Society for Information Science and Technology, 64(5), 1076-1080), we conclude this article by testing the routine on the MeSH category "Brugada syndrome."
Inhalt: Vgl.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23385/abstract.
Objekt: Medline ; PubMed ; Web of Science
9. Ho, Y.-S. ; Kahn, M.: ¬A bibliometric study of highly cited reviews in the Science Citation Index Expanded(TM).
In: Journal of the Association for Information Science and Technology. 65(2014) no.2, S.372-385.
Abstract: Some 1,857 highly cited reviews, namely those cited at least 1,000 times since publication to 2011, were identified using the data hosted on the Science Citation Index Expanded(TM) database (Thomson Reuters, New York, NY) between 1899 and 2011. The data are disaggregated by publication date, citation counts, journals, Web of Science® (Thomson Reuters) subject areas, citation life cycles, and publications by Nobel Prize winners. Six indicators, total publications, independent publications, collaborative publications, first-author publications, corresponding-author publications, and single-author publications, were applied to evaluate publication of institutions and countries. Among the highly cited reviews, 33% were single-author, 61% were single-institution, and 83% were single-country reviews. The United States ranked top for all 6 indicators. The G7 (United States, United Kingdom, Germany, Canada, France, Japan, and Italy) countries were the site of almost all the highly cited reviews. The top 12 most productive institutions were all located in the United States, with Harvard University (Cambridge, MA) the leader. The top 3 most productive journals were Chemical Reviews, Nature, and the Annual Review of Biochemistry. In addition, the impact of the reviews was analyzed by total citations from publication to 2011, citations in 2011, and citations in the publication year.
Objekt: SCI ; Web of Science
10. Crespo, J.A. ; Herranz, N. ; Li, Y. ; Ruiz-Castillo, J.: ¬The effect on citation inequality of differences in citation practices at the Web of Science subject category level.
In: Journal of the Association for Information Science and Technology. 65(2014) no.6, S.1244-1256.
Abstract: This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields the effect on citation inequality of differences in citation practices increases from 14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to express the citation counts of articles into the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
Objekt: Web of Science
11. Mingers, J. ; Macri, F. ; Petrovici, D.: Using the h-index to measure the quality of journals in the field of business and management.
In: Information processing and management. 48(2012) no.2, S.234-241.
Abstract: This paper considers the use of the h-index as a measure of a journal's research quality and contribution. We study a sample of 455 journals in business and management, all of which are included in the ISI Web of Science (WoS) and the Association of Business Schools' peer-review journal ranking list. The h-index is compared with both the traditional impact factors and with the peer review judgements. We also consider two sources of citation data - the WoS itself and Google Scholar. The conclusions are that the h-index is preferable to the impact factor for a variety of reasons, especially the selective coverage of the impact factor and the fact that it disadvantages journals that publish many papers. Google Scholar is also preferred to WoS as a data source. However, the paper notes that it is not sufficient to use any single metric to properly evaluate research achievements.
Inhalt: Vgl.: doi:10.1016/j.ipm.2011.03.009.
Objekt: h-index ; Web of Science ; Google Scholar
12. Huang, M.-H. ; Lin, C.-S. ; Chen, D.-Z.: Counting methods, country rank changes, and counting inflation in the assessment of national research productivity and impact.
In: Journal of the American Society for Information Science and Technology. 62(2011) no.12, S.2427-2436.
Abstract: The counting of papers and citations is fundamental to the assessment of research productivity and impact. In an age of increasing scientific collaboration across national borders, the counting of papers produced by collaboration between multiple countries, and citations of such papers, raises concerns in country-level research evaluation. In this study, we compared the number counts and country ranks resulting from five different counting methods. We also observed inflation depending on the method used. Using the 1989 to 2008 physics papers indexed in ISI's Web of Science as our sample, we analyzed the counting results in terms of paper count (research productivity) as well as citation count and citation-paper ratio (CP ratio) based evaluation (research impact). The results show that at the country-level assessment, the selection of counting method had only minor influence on the number counts and country rankings in each assessment. However, the influences of counting methods varied between paper count, citation count, and CP ratio based evaluation. The findings also suggest that the popular counting method (whole counting) that gives each collaborating country one full credit may not be the best counting method. Straight counting that accredits only the first or the corresponding author or fractional counting that accredits each collaborator with partial and weighted credit might be the better choices.
Objekt: Web of Science
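Entry 12 compares whole, straight, and fractional counting of internationally co-authored papers. The three schemes it names can be sketched as follows (a simplified illustration with hypothetical function names and affiliation data, not the authors' code):

```python
def whole_counting(countries):
    # Every collaborating country receives one full credit.
    return {c: 1.0 for c in set(countries)}

def straight_counting(countries):
    # Only the first (or corresponding) author's country is credited.
    return {countries[0]: 1.0}

def fractional_counting(countries):
    # Each author contributes an equal share of one credit to their country.
    credit = {}
    for c in countries:
        credit[c] = credit.get(c, 0.0) + 1.0 / len(countries)
    return credit

paper = ["US", "US", "DE", "JP"]   # author affiliations of one paper
print(whole_counting(paper))       # US, DE, JP each credited 1.0
print(fractional_counting(paper))  # US 0.5, DE 0.25, JP 0.25
```

The "counting inflation" the authors observe follows directly: whole counting credits this one paper three times in a world total, while fractional counting credits it exactly once.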
13. Larivière, V. ; Macaluso, B.: Improving the coverage of social science and humanities researchers' output : the case of the Érudit journal platform.
In: Journal of the American Society for Information Science and Technology. 62(2011) no.12, S.2437-2442.
Abstract: In non-English-speaking countries the measurement of research output in the social sciences and humanities (SSH) using standard bibliographic databases suffers from a major drawback: the underrepresentation of articles published in local, non-English, journals. Using papers indexed (1) in a local database of periodicals (Érudit) and (2) in the Web of Science, assigned to the population of university professors in the province of Québec, this paper quantifies, for individual researchers and departments, the importance of papers published in local journals. It also analyzes differences across disciplines and between French-speaking and English-speaking universities. The results show that, while the addition of papers published in local journals to bibliometric measures has little effect when all disciplines are considered and for anglophone universities, it increases the output of researchers from francophone universities in the social sciences and humanities by almost a third. It also shows that there is very little relation, at the level of individual researchers or departments, between the output indexed in the Web of Science and the output retrieved from the Érudit database; a clear demonstration that the Web of Science cannot be used as a proxy for the "overall" production of SSH researchers in Québec. The paper concludes with a discussion on these disciplinary and language differences, as well as on their implications for rankings of universities.
Wissenschaftsfach: Sozialwissenschaften ; Geisteswissenschaften
Objekt: Web of Science ; Érudit
14. Calculating the h-index : Web of Science, Scopus or Google Scholar?
Abstract: A comparison of how the h-index is calculated in the three tools, using Stephen Hawking as an example (WoS: 59, Scopus: 19, Google Scholar: 76).
Objekt: h-index ; Web of Science ; Scopus ; Google Scholar
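The h-index compared in entry 14 is simple to compute once a tool's citation counts have been fetched: h is the largest number such that h publications have at least h citations each. A minimal sketch (the citation lists are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h publications have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3: one highly cited paper barely moves h
```

The divergent values reported for Stephen Hawking (59 vs. 19 vs. 76) come entirely from how many publications and citing documents each database covers, not from the formula itself.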
15. Alonso, S. ; Cabrerizo, F.J. ; Herrera-Viedma, E. ; Herrera, F.: WoS query partitioner : a tool to retrieve very large numbers of items from the Web of Science using different source-based partitioning approaches.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1564-1581.
Abstract: Thomson Reuters' Web of Science (WoS) is undoubtedly a great tool for scientometric purposes. It allows one to retrieve and compute different measures such as the total number of papers that satisfy a particular condition; however, it also is well known that this tool imposes several different restrictions that make obtaining certain results difficult. One of those constraints is that the tool does not offer the total count of documents in a dataset if it is larger than 100,000 items. In this article, we propose and analyze different approaches that involve partitioning the search space (using the Source field) to retrieve item counts for very large datasets from the WoS. The proposed techniques improve previous approaches: They do not need any extra information about the retrieved dataset (thus allowing completely automatic procedures to retrieve the results), they are designed to avoid many of the restrictions imposed by the WoS, and they can be easily applied to almost any query. Finally, a description of WoS Query Partitioner, a freely available and online interactive tool that implements those techniques, is presented.
Objekt: Web of Science
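The source-based partitioning of entry 15 can be illustrated with a greedy grouping: keep adding journals to one OR-combined Source query until the expected hit count would exceed the WoS ceiling, then start a new partition. This is an illustrative sketch under assumed per-source hit counts, not the published WoS Query Partitioner code:

```python
def partition_by_source(source_counts, limit=100_000):
    """Group (source, expected_hits) pairs so each partition's total stays
    within the display limit; each partition then becomes one OR-combined
    Source-field query. A single source already over the limit still gets
    its own partition and would need further splitting by other fields."""
    partitions, current, total = [], [], 0
    for source, hits in source_counts:
        if current and total + hits > limit:
            partitions.append(current)
            current, total = [], 0
        current.append(source)
        total += hits
    if current:
        partitions.append(current)
    return partitions

# Hypothetical per-journal hit counts:
demo = [("J1", 60_000), ("J2", 50_000), ("J3", 30_000), ("J4", 90_000)]
print(partition_by_source(demo))  # -> [['J1'], ['J2', 'J3'], ['J4']]
```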
16. García-Pérez, M.A.: Accuracy and completeness of publication and citation records in the Web of Science, PsycINFO, and Google Scholar : a case study for the computation of h indices in Psychology.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.10, S.2070-2085.
Abstract: Hirsch's h index is becoming the standard measure of an individual's research accomplishments. The aggregation of individuals' measures is also the basis for global measures at institutional or national levels. To investigate whether the h index can be reliably computed through alternative sources of citation records, the Web of Science (WoS), PsycINFO and Google Scholar (GS) were used to collect citation records for known publications of four Spanish psychologists. Compared with WoS, PsycINFO included a larger percentage of publication records, whereas GS outperformed WoS and PsycINFO in this respect. Compared with WoS, PsycINFO retrieved a larger number of citations in unique areas of psychology, but it retrieved a smaller number of citations in areas that are close to statistics or the neurosciences, whereas GS retrieved the largest numbers of citations in all cases. Incorrect citations were scarce in WoS (0.3%), more prevalent in PsycINFO (1.1%), and overwhelming in GS (16.5%). All platforms retrieved unique citations, the largest set coming from GS. WoS and PsycINFO cover distinct areas of psychology unevenly, thus applying different penalties on the h index of researchers working in different fields. Obtaining fair and accurate h indices required the union of citations retrieved by all three platforms.
Objekt: h-index ; Web of Science ; PsycINFO ; Google Scholar
17. Archambault, E. ; Campbell, D. ; Gingras, Y. ; Larivière, V.: Comparing bibliometric statistics obtained from the Web of Science and Scopus.
In: Journal of the American Society for Information Science and Technology. 60(2009) no.7, S.1320-1326.
Abstract: For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macrolevel bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high. There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database.
Objekt: Web of Science ; Scopus
18. Meho, L.I. ; Sugimoto, C.R.: Assessing the scholarly impact of information studies : a tale of two citation databases - Scopus and Web of Science.
In: Journal of the American Society for Information Science and Technology. 60(2009) no.12, S.2499-2508.
Abstract: This study uses citations, from 1996 to 2007, to the work of 80 randomly selected full-time, information studies (IS) faculty members from North America to examine differences between Scopus and Web of Science in assessing the scholarly impact of the field focusing on the most frequently citing journals, conference proceedings, research domains and institutions, as well as all citing countries. Results show that when assessment is limited to smaller citing entities (e.g., journals, conference proceedings, institutions), the two databases produce considerably different results, whereas when assessment is limited to larger citing entities (e.g., research domains, countries), the two databases produce very similar pictures of scholarly impact. In the former case, the use of Scopus (for journals and institutions) and both Scopus and Web of Science (for conference proceedings) is necessary to more accurately assess or visualize the scholarly impact of IS, whereas in the latter case, assessing or visualizing the scholarly impact of IS is independent of the database used.
Objekt: Scopus ; Web of Science
19. McVeigh, M.E.: Citation indexes and the Web of Science.
In: Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates. London : Taylor & Francis, 2009. S.xx-xx.
Abstract: The Web of Science, an online database of bibliographic information produced by Thomson Reuters, draws its real value from the scholarly citation index at its core. By indexing the cited references from each paper as a separate part of the bibliographic data, a citation index creates a pathway by which a paper can be linked backward in time to the body of work that preceded it, as well as linked forward in time to its scholarly descendants. This entry provides a brief history of the development of the citation index, its core functionalities, and the way these unique data are provided to users through the Web of Science.
Anmerkung: Vgl.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
Themenfeld: Citation indexing
Objekt: Web of Science ; Science Citation Index ; Social Sciences Citation Index ; Arts and Humanities Citation Index
20. H-Index auch im Web of Science.
In: Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 61(2008) H.1, S.124-125.
Inhalt: "Regarding the brief notice "Latest enhancements in Scopus: ... h-Index incorporated in Scopus" in the last online newsletter (Online-Mitteilungen 92, p.31), it should be corrected that the h-index is in fact already available in the Web of Science. However, it is found not in the "cited ref search" but next to the result list of a Quick Search, General Search, or a search via the Author Finder, in the right-hand navigation bar under the heading "Citation Report". For the works shown in the respective result list, the "Citation Report" provides: the total number of citations of all works in the result list; the mean citation frequency of these works; the number of citations of each individual work, broken down by the publication year of the citing works; the mean citation frequency of these works per year; and the h-index (an h-index of x states that x works in the result list were cited more than x times; it is less sensitive than the mean citation frequency to very high citation counts of individual works)."
Themenfeld: Citation indexing ; Informetrie
Objekt: H-Index ; Web of Science