Search (30 results, page 1 of 2)

  • author_ss:"Larivière, V."
  1. Larivière, V.; Gingras, Y.; Archambault, E.: The decline in the concentration of citations, 1900-2007 (2009) 0.02
    0.023156527 = product of:
      0.046313055 = sum of:
        0.019136423 = weight(_text_:for in 2763) [ClassicSimilarity], result of:
          0.019136423 = score(doc=2763,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 2763, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2763)
        0.027176632 = product of:
          0.054353263 = sum of:
            0.054353263 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.054353263 = score(doc=2763,freq=4.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.32829654 = fieldWeight in 2763, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
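The explanation tree above is Lucene's ClassicSimilarity output; a minimal sketch reproducing its arithmetic from the constants shown (tf = √freq, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, clause score = queryWeight × fieldWeight, scaled by the coord factors):

```python
import math

def field_weight(freq, idf, field_norm):
    # ClassicSimilarity: tf = sqrt(freq); fieldWeight = tf * idf * fieldNorm
    return math.sqrt(freq) * idf * field_norm

query_norm = 0.047278564
field_norm = 0.046875

# clause 1: _text_:for (idf shown for docFreq=18385, maxDocs=44218)
idf_for = 1.8775425
w_for = (idf_for * query_norm) * field_weight(6.0, idf_for, field_norm)

# clause 2: _text_:22, scaled by the inner coord(1/2)
idf_22 = 3.5018296
w_22 = (idf_22 * query_norm) * field_weight(4.0, idf_22, field_norm) * 0.5

score = (w_for + w_22) * 0.5  # outer coord(2/4)
print(round(score, 9))
```

Running this reproduces the 0.023156527 total and the two clause weights shown in the tree.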
    
    Abstract
    This article challenges recent research (Evans, 2008) reporting that the concentration of cited scientific literature increases with the online availability of articles and journals. Using Thomson Reuters' Web of Science, the present article analyses changes in the concentration of citations received (2- and 5-year citation windows) by papers published between 1900 and 2005. Three measures of concentration are used: the percentage of papers that received at least one citation (cited papers); the percentage of papers needed to account for 20%, 50%, and 80% of the citations; and the Herfindahl-Hirschman index (HHI). These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.
    Date
    22. 3.2009 19:22:35
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.858-862
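The concentration measures named in the abstract above are easy to state; a minimal sketch of the Herfindahl-Hirschman index and the "percentage of papers needed to account for a given share of citations", applied to hypothetical citation counts (illustrative only, not data from the study):

```python
def hhi(citations):
    # Herfindahl-Hirschman index: sum of squared shares of all citations;
    # values near 1 indicate high concentration, near 1/n low concentration
    total = sum(citations)
    return sum((c / total) ** 2 for c in citations)

def share_of_papers(citations, share):
    # Smallest fraction of papers accounting for `share` of all citations
    ranked = sorted(citations, reverse=True)
    target, acc = share * sum(ranked), 0.0
    for i, c in enumerate(ranked, start=1):
        acc += c
        if acc >= target:
            return i / len(ranked)

counts = [40, 25, 10, 5, 5, 3, 1, 1, 0, 0]   # hypothetical citation counts
print(hhi(counts))                   # ~0.295
print(share_of_papers(counts, 0.5))  # 0.2 -> 20% of papers hold half the citations
```

A falling HHI and a rising share-of-papers value both indicate the kind of dispersion the article reports.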
  2. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.01
    
    Abstract
    This year marks 350 years since the inaugural publications of both the Journal des Sçavans and the Philosophical Transactions, first published in 1665 and considered the birth of the peer-reviewed journal article. This form of scholarly communication has not only remained the dominant model for disseminating new knowledge (particularly for science and medicine), but has also increased substantially in volume. Derek de Solla Price - the "father of scientometrics" (Merton and Garfield, 1986, p. vii) - was the first to document the exponential increase in scientific journals and showed that "scientists have always felt themselves to be awash in a sea of the scientific literature" (Price, 1963, p. 15), as, for example, expressed at the 1948 Royal Society's Scientific Information Conference: Not for the first time in history, but more acutely than ever before, there was a fear that scientists would be overwhelmed, that they would be no longer able to control the vast amounts of potentially relevant material that were pouring forth from the world's presses, that science itself was under threat (Bawden and Robinson, 2008, p. 183).
    One of the solutions to help scientists filter the most relevant publications and, thus, to stay current on developments in their fields during the transition from "little science" to "big science", was the introduction of citation indexing as a Wellsian "World Brain" (Garfield, 1964) of scientific information: It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108). In retrospect, citation indexing can be perceived as a pre-social web version of crowdsourcing, as it is based on the concept that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly on the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools to research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and the oversimplification of notions of "research productivity" and "scientific quality", creating adverse effects such as salami publishing, honorary authorships, citation cartels, and misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).
    Furthermore, the rise of the web, and subsequently, the social web, has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and citation indices as the primary assessment mechanisms. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post-publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, leads to an increased information overload (Bawden and Robinson, 2008), demanding new filters. The concept of altmetrics, short for alternative (to citation) metrics, was created out of an attempt to provide a filter (Priem et al., 2010) and to steer against the oversimplification of the measurement of scientific success solely on the basis of number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces - "polymorphous mentioning" (Cronin et al., 1998, p. 1320) - of scholars and their documents on the web to measure "impact" of science in a broader manner than citations was introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):
    Date
    20. 1.2015 18:30:22
  3. Vincent-Lamarre, P.; Boivin, J.; Gargouri, Y.; Larivière, V.; Harnad, S.: Estimating open access mandate effectiveness : the MELIBEA score (2016) 0.01
    
    Abstract
    MELIBEA is a directory of institutional open-access policies for research output that uses a composite formula with eight weighted conditions to estimate the "strength" of open access (OA) mandates (registered in ROARMAP). We analyzed total Web of Science-(WoS)-indexed publication output in years 2011-2013 for 67 institutions in which OA was mandated to estimate the mandates' effectiveness: How well did the MELIBEA score and its individual conditions predict what percentage of the WoS-indexed articles is actually deposited in each institution's OA repository, and when? We found a small but significant positive correlation (0.18) between the MELIBEA "strength" score and deposit percentage. For three of the eight MELIBEA conditions (deposit timing, internal use, and opt-outs), one value of each was strongly associated with deposit percentage or latency ([a] immediate deposit required; [b] deposit required for performance evaluation; [c] unconditional opt-out allowed for the OA requirement but no opt-out for deposit requirement). When we updated the initial values and weights of the MELIBEA formula to reflect the empirical association we had found, the score's predictive power for mandate effectiveness doubled (0.36). There are not yet enough OA mandates to test further mandate conditions that might contribute to mandate effectiveness, but the present findings already suggest that it would be productive for existing and future mandates to adopt the three identified conditions so as to maximize their effectiveness, and thereby the growth of OA.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.11, S.2815-2828
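The study above correlates a weighted composite policy score with deposit percentages; a minimal sketch of such a composite score and a Pearson correlation, with all condition values, weights, and institution numbers hypothetical (not from the study):

```python
import math

def composite_score(values, weights):
    # Weighted sum of policy-condition values, normalized by total weight
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def pearson(xs, ys):
    # Pearson product-moment correlation between two equal-length series
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical institutions: mandate "strength" scores vs. deposit percentages
scores = [0.2, 0.4, 0.5, 0.7, 0.9]
deposits = [10.0, 18.0, 15.0, 30.0, 42.0]
print(pearson(scores, deposits))   # positive correlation
```

Re-weighting the conditions to fit observed deposit behavior, as the authors do, amounts to choosing `weights` that maximize this correlation.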
  4. Archambault, E.; Campbell, D.; Gingras, Y.; Larivière, V.: Comparing bibliometric statistics obtained from the Web of Science and Scopus (2009) 0.01
    
    Abstract
    For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macrolevel bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high. There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.7, S.1320-1326
  5. Mohammadi, E.; Thelwall, M.; Haustein, S.; Larivière, V.: Who reads research articles? : an altmetrics analysis of Mendeley user categories (2015) 0.01
    
    Abstract
    Little detailed information is known about who reads research articles and the contexts in which research articles are read. Using data about people who register in Mendeley as readers of articles, this article explores different types of users of Clinical Medicine, Engineering and Technology, Social Science, Physics, and Chemistry articles inside and outside academia. The majority of readers for all disciplines were PhD students, postgraduates, and postdocs but other types of academics were also represented. In addition, many Clinical Medicine articles were read by medical professionals. The highest correlations between citations and Mendeley readership counts were found for types of users who often authored academic articles, except for associate professors in some sub-disciplines. This suggests that Mendeley readership can reflect usage similar to traditional citation impact if the data are restricted to readers who are also authors without the delay of impact measured by citation counts. At the same time, Mendeley statistics can also reveal the hidden impact of some research articles, such as educational value for nonauthor users inside academia or the impact of research articles on practice for readers outside academia.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1832-1846
  6. Mongeon, P.; Larivière, V.: Costly collaborations : the impact of scientific fraud on co-authors' careers (2016) 0.01
    
    Abstract
    Over the past few years, several major scientific fraud cases have shocked the scientific community. The number of retractions each year has also increased tremendously, especially in the biomedical field, and scientific misconduct accounts for more than half of those retractions. It is assumed that co-authors of retracted papers are affected by their colleagues' misconduct, and the aim of this study is to provide empirical evidence of the effect of retractions in biomedical research on co-authors' research careers. Using data from the Web of Science, we measured the productivity, impact, and collaboration of 1,123 co-authors of 293 retracted articles for a period of 5 years before and after the retraction. We found clear evidence that collaborators do suffer consequences of their colleagues' misconduct and that a retraction for fraud has higher consequences than a retraction for error. Our results also suggest that the extent of these consequences is closely linked with the ranking of co-authors on the retracted paper, being felt most strongly by first authors, followed by the last authors, with the impact being less important for middle authors.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.535-542
  7. Larivière, V.; Archambault, E.; Gingras, Y.; Vignola-Gagné, E.: The place of serials in referencing practices : comparing natural sciences and engineering with social sciences and humanities (2006) 0.01
    
    Abstract
    Journal articles constitute the core documents for the diffusion of knowledge in the natural sciences. It has been argued that the same is not true for the social sciences and humanities where knowledge is more often disseminated in monographs that are not indexed in the journal-based databases used for bibliometric analysis. Previous studies have made only partial assessments of the role played by both serials and other types of literature. The importance of journal literature in the various scientific fields has therefore not been systematically characterized. The authors address this issue by providing a systematic measurement of the role played by journal literature in the building of knowledge in both the natural sciences and engineering and the social sciences and humanities. Using citation data from the CD-ROM versions of the Science Citation Index (SCI), Social Science Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) databases from 1981 to 2000 (Thomson ISI, Philadelphia, PA), the authors quantify the share of citations to both serials and other types of literature. Variations in time and between fields are also analyzed. The results show that journal literature is increasingly important in the natural and social sciences, but that its role in the humanities is stagnant and has even tended to diminish slightly in the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be used when using bibliometric indicators that rely only on journal literature.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.8, S.997-1004
  8. Larivière, V.; Macaluso, B.: Improving the coverage of social science and humanities researchers' output : the case of the Érudit journal platform (2011) 0.01
    
    Abstract
    In non-English-speaking countries the measurement of research output in the social sciences and humanities (SSH) using standard bibliographic databases suffers from a major drawback: the underrepresentation of articles published in local, non-English, journals. Using papers indexed (1) in a local database of periodicals (Érudit) and (2) in the Web of Science, assigned to the population of university professors in the province of Québec, this paper quantifies, for individual researchers and departments, the importance of papers published in local journals. It also analyzes differences across disciplines and between French-speaking and English-speaking universities. The results show that, while the addition of papers published in local journals to bibliometric measures has little effect when all disciplines are considered and for anglophone universities, it increases the output of researchers from francophone universities in the social sciences and humanities by almost a third. It also shows that there is very little relation, at the level of individual researchers or departments, between the output indexed in the Web of Science and the output retrieved from the Érudit database; a clear demonstration that the Web of Science cannot be used as a proxy for the "overall" production of SSH researchers in Québec. The paper concludes with a discussion on these disciplinary and language differences, as well as on their implications for rankings of universities.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.12, S.2437-2442
  9. Larivière, V.; Gingras, Y.: The impact factor's Matthew Effect : a natural experiment in bibliometrics (2010) 0.00
    
    Abstract
    Since the publication of Robert K. Merton's theory of cumulative advantage in science (Matthew Effect), several empirical studies have tried to measure its presence at the level of papers, individual researchers, institutions, or countries. However, these studies seldom control for the intrinsic quality of papers or of researchers - better (however defined) papers or researchers could receive higher citation rates because they are indeed of better quality. Using an original method for controlling the intrinsic value of papers - identical duplicate papers published in different journals with different impact factors - this paper shows that the journal in which papers are published has a strong influence on their citation rates, as duplicate papers published in high-impact journals obtain, on average, twice as many citations as their identical counterparts published in journals with lower impact factors. The intrinsic value of a paper is thus not the only reason a given paper gets cited or not; there is a specific Matthew Effect attached to journals, and this gives papers published there an added value over and above their intrinsic quality.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.2, S.424-427
  10. Larivière, V.; Gingras, Y.: On the relationship between interdisciplinarity and scientific impact (2009) 0.00
    
    Abstract
    This article analyzes the effect of interdisciplinarity on the scientific impact of individual articles. Using all the articles published in Web of Science in 2000, we define the degree of interdisciplinarity of a given article as the percentage of its cited references made to journals of other disciplines. We show that although for all disciplines combined there is no clear correlation between the level of interdisciplinarity of articles and their citation rates, there are nonetheless some disciplines in which a higher level of interdisciplinarity is related to higher citation rates. For other disciplines, citations decline as interdisciplinarity grows. One characteristic is visible in all disciplines: Highly disciplinary and highly interdisciplinary articles have a low scientific impact. This suggests that there might be an optimum of interdisciplinarity beyond which the research is too dispersed to find its niche and under which it is too mainstream to have high impact. Finally, the relationship between interdisciplinarity and scientific impact is highly determined by the citation characteristics of the disciplines involved: Articles citing citation-intensive disciplines are more likely to be cited by those disciplines and, hence, obtain higher citation scores than would articles citing non-citation-intensive disciplines.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.1, S.126-131
  11. Kozlowski, D.; Andersen, J.P.; Larivière, V.: The decrease in uncited articles and its effect on the concentration of citations (2024) 0.00
    
    Abstract
    Empirical evidence demonstrates that citations received by scholarly publications follow a pattern of preferential attachment, resulting in a power-law distribution. Such asymmetry has sparked significant debate regarding the use of citations for research evaluation. However, a consensus has yet to be established concerning the historical trends in citation concentration. Are citations becoming more concentrated in a small number of articles? Or have recent geopolitical and technical changes in science led to more decentralized distributions? This ongoing debate stems from a lack of technical clarity in measuring inequality. Given the variations in citation practices across disciplines and over time, it is crucial to account for multiple factors that can influence the findings. This article explores how reference-based and citation-based approaches, uncited articles, citation inflation, the expansion of bibliometric databases, disciplinary differences, and self-citations affect the evolution of citation concentration. Our results indicate a decreasing trend in citation concentration, primarily driven by a decline in uncited articles, which, in turn, can be attributed to the growing significance of Asia and Europe. On the whole, our findings clarify current debates on citation concentration and show that, contrary to a widely-held belief, citations are increasingly scattered.
    Source
    Journal of the Association for Information Science and Technology. 75(2024) no.2, S.188-197
  12. Larivière, V.; Sugimoto, C.R.; Bergeron, P.: In their own image? : a comparison of doctoral students' and faculty members' referencing behavior (2013) 0.00
    
    Abstract
    This article compares doctoral students' and faculty members' referencing behavior through the analysis of a large corpus of scientific articles. It shows that doctoral students tend to cite more documents per article than faculty members, and that the literature they cite is, on average, more recent. It also demonstrates that doctoral students cite a larger proportion of conference proceedings and journal articles than faculty members, and that faculty members are more likely to self-cite and to cite theses than doctoral students. Analysis of the impact of cited journals indicates that in health research, faculty members tend to cite journals with slightly lower impact factors, whereas in social sciences and humanities, faculty members cite journals with higher impact factors. Finally, it provides evidence that, in every discipline, faculty members tend to cite a higher proportion of clinical/applied research journals than doctoral students. This study contributes to the understanding of referencing patterns and age stratification in academia. Implications for understanding the information-seeking behavior of academics are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.5, S.1045-1054
  13. Lachance, C.; Poirier, S.; Larivière, V.: The kiss of death? : the effect of being cited in a review on subsequent citations (2014) 0.00
    
    Abstract
    This work investigates recent claims that citation in a review article provokes a decline in a paper's later citation count; citations being given to the review article instead of the original paper. Using the Science Citation Index Expanded, we looked at the yearly percentages of lifetime citations of papers published in 1990 first cited in review articles in 1992 and 1995 in the field of biomedical research, and found that no significant change occurred after citation in a review article, regardless of the papers' citation activity or specialty. Additional comparison was done for papers from the field of clinical research, and this yielded no meaningful results to support the notion that review articles have any substantial effect on the citation count of the papers they review.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.7, S.1501-1505
  14. Larivière, V.; Gingras, Y.; Sugimoto, C.R.; Tsou, A.: Team size matters : collaboration and scientific impact since 1900 (2015) 0.00
    0.0039062058 = product of:
      0.015624823 = sum of:
        0.015624823 = weight(_text_:for in 2035) [ClassicSimilarity], result of:
          0.015624823 = score(doc=2035,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 2035, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2035)
      0.25 = coord(1/4)
    
    Abstract
    This article provides the first historical analysis of the relationship between collaboration and scientific impact using three indicators of collaboration (number of authors, number of addresses, and number of countries) derived from articles published between 1900 and 2011. The results demonstrate that an increase in the number of authors leads to an increase in impact, from the beginning of the last century onward, and that this is not due simply to self-citations. A similar trend is also observed for the number of addresses and number of countries represented in the byline of an article. However, the constant inflation of collaboration since 1900 has resulted in diminishing citation returns: Larger and more diverse (in terms of institutional and country affiliation) teams are necessary to realize higher impact. The article concludes with a discussion of the potential causes of the impact gain in citations of collaborative papers.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1323-1332
  15. Hu, B.; Dong, X.; Zhang, C.; Bowman, T.D.; Ding, Y.; Milojevic, S.; Ni, C.; Yan, E.; Larivière, V.: ¬A lead-lag analysis of the topic evolution patterns for preprints and publications (2015) 0.00
    0.0039062058 = product of:
      0.015624823 = sum of:
        0.015624823 = weight(_text_:for in 2337) [ClassicSimilarity], result of:
          0.015624823 = score(doc=2337,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 2337, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2337)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2643-2656
  16. Bertin, M.; Atanassova, I.; Gingras, Y.; Larivière, V.: ¬The invariant distribution of references in scientific articles (2016) 0.00
    0.0039062058 = product of:
      0.015624823 = sum of:
        0.015624823 = weight(_text_:for in 2497) [ClassicSimilarity], result of:
          0.015624823 = score(doc=2497,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 2497, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2497)
      0.25 = coord(1/4)
    
    Abstract
    The organization of scientific papers typically follows a standardized pattern, the well-known IMRaD structure (introduction, methods, results, and discussion). Using the full text of 45,000 papers published in the PLoS series of journals as a case study, this paper investigates, from the viewpoint of bibliometrics, how references are distributed along the structure of scientific papers as well as the age of these cited references. Once the sections of articles are realigned to follow the IMRaD sequence, the position of cited references along the text of articles is invariant across all PLoS journals, with the introduction and discussion accounting for most of the references. It also provides evidence that the age of cited references varies by section, with older references being found in the methods and more recent references in the discussion. These results provide insight into the different roles citations have in the scholarly communication process.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.164-177
  17. Haustein, S.; Bowman, T.D.; Holmberg, K.; Tsou, A.; Sugimoto, C.R.; Larivière, V.: Tweets as impact indicators : Examining the implications of automated "bot" accounts on Twitter (2016) 0.00
    0.0039062058 = product of:
      0.015624823 = sum of:
        0.015624823 = weight(_text_:for in 2502) [ClassicSimilarity], result of:
          0.015624823 = score(doc=2502,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 2502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2502)
      0.25 = coord(1/4)
    
    Abstract
    This brief communication presents preliminary findings on automated Twitter accounts that distribute links to scientific articles deposited in the preprint repository arXiv. It discusses the implications of the presence of such bots from the perspective of social media metrics (altmetrics), where mentions of scholarly documents on Twitter have been suggested as a means of measuring impact that is both broader and timelier than citations. Our results show that automated Twitter accounts create a considerable number of tweets linking to scientific articles and that they behave differently from common social bots, which has critical implications for the use of raw tweet counts in research evaluation and assessment. We discuss some definitions of Twitter cyborgs and bots in scholarly communication and propose distinguishing between different levels of engagement, that is, differentiating between tweeting only bibliographic information and discussing or commenting on the content of a scientific work.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.232-238
  18. Sugimoto, C.R.; Work, S.; Larivière, V.; Haustein, S.: Scholarly use of social media and altmetrics : A review of the literature (2017) 0.00
    0.0039062058 = product of:
      0.015624823 = sum of:
        0.015624823 = weight(_text_:for in 3781) [ClassicSimilarity], result of:
          0.015624823 = score(doc=3781,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 3781, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3781)
      0.25 = coord(1/4)
    
    Abstract
    Social media has become integrated into the fabric of the scholarly communication system in fundamental ways, principally through scholarly use of social media platforms and the promotion of new indicators based on interactions with these platforms. Research and scholarship in this area have accelerated since the coining of, and subsequent advocacy for, altmetrics, that is, research indicators based on social media activity. This review provides an extensive account of the state of the art in both scholarly use of social media and altmetrics. The review consists of two main parts: the first examines the use of social media in academia, reviewing the various functions these platforms serve in the scholarly communication process and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the various interpretations of altmetrics, data collection and methodological limitations, and differences according to platform. The review ends with a critical discussion of the implications of this transformation of the scholarly communication system.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.9, S.2037-2062
  19. Lisée, C.; Larivière, V.; Archambault, E.: Conference proceedings as a source of scientific information : a bibliometric analysis (2008) 0.00
    0.0032551715 = product of:
      0.013020686 = sum of:
        0.013020686 = weight(_text_:for in 2356) [ClassicSimilarity], result of:
          0.013020686 = score(doc=2356,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 2356, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2356)
      0.25 = coord(1/4)
    
    Abstract
    While several authors have argued that conference proceedings are an important source of scientific knowledge, the extent of their importance has not been measured systematically. This article examines the scientific impact and aging of conference proceedings compared with those of the scientific literature in general. It shows that the relative importance of proceedings is diminishing over time: they currently account for only 1.7% of references made in the natural sciences and engineering, and 2.5% in the social sciences and humanities. Although the scientific impact of proceedings is losing ground to other types of scientific literature in nearly all fields, in engineering it has grown from 8% of references in the early 1980s to its current 10%. Proceedings play a particularly important role in computer science, where they account for close to 20% of references. This article also shows that, not unexpectedly, proceedings age faster than the cited scientific literature in general. The evidence thus shows that proceedings have a relatively limited scientific impact, on average representing only about 2% of total citations; that their relative importance is shrinking; and that they become obsolete faster than the scientific literature in general.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.11, S.1776-1784
  20. Larivière, V.; Lozano, G.A.; Gingras, Y.: Are elite journals declining? (2014) 0.00
    0.0032551715 = product of:
      0.013020686 = sum of:
        0.013020686 = weight(_text_:for in 1228) [ClassicSimilarity], result of:
          0.013020686 = score(doc=1228,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 1228, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1228)
      0.25 = coord(1/4)
    
    Abstract
    Previous research indicates that during the past 20 years, the highest-quality work has been published in an increasingly diverse and larger group of journals. In this article, we examine whether this diversification has also affected the handful of elite journals traditionally considered to be the best. We examine citation patterns over the past 40 years in seven long-standing, traditionally elite journals and six journals that have been increasing in importance during the past 20 years. To be among the top 5% or 1% of cited papers, papers now need about twice as many citations as they did 40 years ago. Since the late 1980s and early 1990s, elite journals have been publishing a decreasing proportion of these top-cited papers. This also applies to the two journals typically considered the top venues and often used as bibliometric indicators of "excellence": Science and Nature. On the other hand, several new and established journals are publishing an increasing proportion of the most-cited papers. These changes bring new challenges and opportunities for all parties. Journals can enact policies to increase or maintain their relative position in the journal hierarchy. Researchers now have the option to publish in more diverse venues, knowing that their work can still reach the same audiences. Finally, evaluators and administrators should be aware that although a certain prestige will always be associated with publishing in "elite" journals, journal hierarchies are in constant flux.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.649-655