This database contains over 40,000 documents on topics from the fields of descriptive cataloging, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft / Powered by litecat, BIS Oldenburg (as of December 16, 2019)
1. Cabanac, G.: Bibliogifts in LibGen? : a study of a text-sharing platform driven by biblioleaks and crowdsourcing.
In: Journal of the Association for Information Science and Technology. 67(2016) no.4, pp.874-884.
Abstract: Research articles disseminate the knowledge produced by the scientific community. Access to this literature is crucial for researchers and the general public. Apparently, "bibliogifts" are available online for free from text-sharing platforms. However, little is known about such platforms. What is the size of the underlying digital libraries? What are the topics covered? Where do these documents originally come from? This article reports on a study of the Library Genesis platform (LibGen). The 25 million documents (42 terabytes) it hosts and distributes for free are mostly research articles, textbooks, and books in English. The article collection stems from isolated, but massive, article uploads (71%) in line with a "biblioleaks" scenario, as well as from daily crowdsourcing (29%) by worldwide users of platforms such as Reddit Scholar and Sci-Hub. By relating the DOIs registered at CrossRef and those cached at LibGen, this study reveals that 36% of all DOI articles are available for free at LibGen. This figure is even higher (68%) for three major publishers: Elsevier, Springer, and Wiley. More research is needed to understand to what extent researchers and the general public have recourse to such text-sharing platforms and why.
Content: Cf.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23445/abstract.
Subject area: Electronic publishing
Object: LibGen ; Sci-Hub
2. Hartley, J. ; Cabanac, G. ; Kozak, M. ; Hubert, G.: Research on tables and graphs in academic articles : pitfalls and promises.
In: Journal of the Association for Information Science and Technology. 66(2015) no.2, pp.408-427.
Abstract: Many papers have appeared recently assessing the effects of using tables and graphs in scientific publications. In this brief communication, we assess some of the methodological difficulties that have arisen in this context. These difficulties encompass issues of data availability, suitability of indicators, nature and purpose of tables and graphs, and the role of supplementary information.
Content: Cf.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23208/abstract.
3. Cabanac, G. ; Hubert, G. ; Hartley, J.: Solo versus collaborative writing : discrepancies in the use of tables and graphs in academic articles.
In: Journal of the Association for Information Science and Technology. 65(2014) no.4, pp.812-820.
Abstract: The number of authors collaborating to write scientific articles has been increasing steadily, and with this collaboration, other factors have also changed, such as the length of articles and the number of citations. However, little is known about potential discrepancies in the use of tables and graphs between single and collaborating authors. In this article, we ask whether multiauthor articles contain more tables and graphs than single-author articles, and we studied 5,180 recent articles published in six science and social sciences journals. We found that pairs and multiple authors used significantly more tables and graphs than single authors. Such findings indicate that there is a greater emphasis on the role of tables and graphs in collaborative writing, and we discuss some of the possible causes and implications of these findings.
4. Cabanac, G. ; Preuss, T.: Capitalizing on order effects in the bids of peer-reviewed conferences to secure reviews by expert referees.
In: Journal of the American Society for Information Science and Technology. 64(2013) no.2, pp.405-415.
Abstract: Peer review supports scientific conferences in selecting high-quality papers for publication. Referees are expected to evaluate submissions equitably according to objective criteria (e.g., originality of the contribution, soundness of the theory, validity of the experiments). We argue that the submission date of papers is a subjective factor playing a role in the way they are evaluated. Indeed, program committee (PC) chairs and referees process submission lists that are usually sorted by paperIDs. This order conveys chronological information, as papers are numbered sequentially upon reception. We show that order effects lead to unconscious favoring of early-submitted papers to the detriment of later-submitted papers. Our point is supported by a study of 42 peer-reviewed conferences in Computer Science showing a decrease in the number of bids placed on submissions with higher paperIDs. We advise counterbalancing order effects during the bidding phase of peer review by promoting the submissions with fewer bids to potential referees. This manipulation intends to better share bids out among submissions in order to attract qualified referees for all submissions. This would secure reviews from confident referees, who are keen on voicing sharp opinions and recommendations (acceptance or rejection) about submissions. This work contributes to the integrity of peer review, which is mandatory to maintain public trust in science.
5. Cabanac, G. ; Hartley, J.: Issues of work-life balance among JASIST authors and editors.
In: Journal of the American Society for Information Science and Technology. 64(2013) no.10, pp.2182-2186.
Abstract: Many dedicated scientists reject the concept of maintaining a "work-life balance." They argue that work is actually a huge part of life. In the mind-set of these scientists, weekdays and weekends are equally appropriate for working on their research. Although we all have encountered such people, we may wonder how widespread this condition is among other scientists in our field. This brief communication probes work-life balance issues among JASIST authors and editors. We collected and examined the publication histories for 1,533 of the 2,402 articles published in JASIST between 2001 and 2012. Although there is no rush to submit, revise, or accept papers, we found that 11% of these events happened during weekends and that this trend has been increasing since 2005. Our findings suggest that working during the weekend may be one of the ways that scientists cope with the highly demanding era of "publish or perish." We hope that our findings will raise awareness of the steady increases in work among scientists before it affects our work-life balance even more.
6. Cabanac, G.: Shaping the landscape of research in information systems from the perspective of editorial boards : a scientometric study of 77 leading journals.
In: Journal of the American Society for Information Science and Technology. 63(2012) no.5, pp.977-996.
Abstract: Characteristics of the Journal of the American Society for Information Science and Technology and 76 other journals listed in the Information Systems category of the Journal Citation Reports - Science edition 2009 were analyzed. Besides reporting usual bibliographic indicators, we investigated the human cornerstone of any peer-reviewed journal: its editorial board. Demographic data about the 2,846 gatekeepers serving on information systems (IS) editorial boards were collected. We discuss various scientometric indicators supported by descriptive statistics. Our findings reflect the great variety of IS journals in terms of research output, author communities, editorial boards, and gatekeeper demographics (e.g., diversity in gender and location), seniority, authority, and degree of involvement in editorial boards. We believe that these results may help the general public and scholars (e.g., readers, authors, journal gatekeepers, policy makers) to revise and increase their knowledge of scholarly communication in the IS field. The EB_IS_2009 dataset supporting this scientometric study is released as online supplementary material to this article to foster further research on editorial boards.
7. Cabanac, G. ; Chevalier, M. ; Chrisment, C. ; Julien, C.: Social validation of collective annotations : definition and experiment.
In: Journal of the American Society for Information Science and Technology. 61(2010) no.2, pp.271-287.
Abstract: People taking part in argumentative debates through collective annotations face a cognitively demanding task when trying to estimate the group's global opinion. In order to reduce this effort, we propose in this paper to model such debates prior to evaluating their social validation. Computing the degree of global confirmation (or refutation) enables the identification of consensual (or controversial) debates. Readers as well as prominent information systems may thus benefit from this information. The accuracy of the social validation measure was tested through an online study conducted with 121 participants. We compared their human perception of consensus in argumentative debates with the results of the three proposed social validation algorithms. Their efficiency in synthesizing opinions was demonstrated by the fact that they achieved an accuracy of up to 84%.