Search (14 results, page 1 of 1)

  • author_ss:"Gingras, Y."
  1. Larivière, V.; Gingras, Y.; Archambault, E.: ¬The decline in the concentration of citations, 1900-2007 (2009) 0.01
    Abstract
    This article challenges recent research (Evans, 2008) reporting that the concentration of cited scientific literature increases with the online availability of articles and journals. Using Thomson Reuters' Web of Science, the present article analyses changes in the concentration of citations received (2- and 5-year citation windows) by papers published between 1900 and 2005. Three measures of concentration are used: the percentage of papers that received at least one citation (cited papers); the percentage of papers needed to account for 20%, 50%, and 80% of the citations; and the Herfindahl-Hirschman index (HHI). These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.
    Date
    22. 3.2009 19:22:35
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.858-862
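The three concentration measures named in this abstract (share of cited papers, share of papers needed to account for a given fraction of citations, and the Herfindahl-Hirschman index) are simple to compute. A minimal sketch; the function names and toy citation counts are illustrative, not from the paper:

```python
def herfindahl(citations):
    """Herfindahl-Hirschman index of a citation distribution: sum of
    squared shares of total citations. 1/N means perfectly dispersed,
    1.0 means all citations go to a single paper."""
    total = sum(citations)
    if total == 0:
        return 0.0
    return sum((c / total) ** 2 for c in citations)

def share_cited(citations):
    """Percentage of papers that received at least one citation."""
    return 100.0 * sum(1 for c in citations if c > 0) / len(citations)

def papers_for_share(citations, share):
    """Percentage of papers needed to account for `share` (e.g. 0.5)
    of all citations, counting from the most-cited paper down."""
    target = share * sum(citations)
    running, n = 0.0, 0
    for c in sorted(citations, reverse=True):
        if running >= target:
            break
        running += c
        n += 1
    return 100.0 * n / len(citations)
```

On these definitions, rising dispersion shows up as a falling HHI and a rising share of cited papers over time.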
  2. Archambault, E.; Campbell, D.; Gingras, Y.; Larivière, V.: Comparing bibliometric statistics obtained from the Web of Science and Scopus (2009) 0.01
    Abstract
    For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macrolevel bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high. There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database.
    Date
    19. 7.2009 12:20:29
    Object
    Web of Science
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.7, S.1320-1326
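The country-level comparison described above amounts to correlating paper and citation counts, and their ranks, across the two databases. A minimal sketch; the helper functions and the country counts are hypothetical stand-ins for the paper's macrolevel indicators:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(values):
    """Rank positions (1 = largest); ties are not handled in this sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

# Invented country-level paper counts from the two databases.
wos = [300, 120, 80, 40]
scopus = [310, 118, 90, 35]

r_counts = pearson(wos, scopus)                 # correlation of raw counts
r_ranks = pearson(ranks(wos), ranks(scopus))    # rank correlation
```

The paper's finding is that both kinds of correlation come out extremely high on real data, as they do on this toy example.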
  3. Larivière, V.; Archambault, E.; Gingras, Y.; Vignola-Gagné, E.: ¬The place of serials in referencing practices : comparing natural sciences and engineering with social sciences and humanities (2006) 0.00
    Abstract
    Journal articles constitute the core documents for the diffusion of knowledge in the natural sciences. It has been argued that the same is not true for the social sciences and humanities where knowledge is more often disseminated in monographs that are not indexed in the journal-based databases used for bibliometric analysis. Previous studies have made only partial assessments of the role played by both serials and other types of literature. The importance of journal literature in the various scientific fields has therefore not been systematically characterized. The authors address this issue by providing a systematic measurement of the role played by journal literature in the building of knowledge in both the natural sciences and engineering and the social sciences and humanities. Using citation data from the CD-ROM versions of the Science Citation Index (SCI), Social Science Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) databases from 1981 to 2000 (Thomson ISI, Philadelphia, PA), the authors quantify the share of citations to both serials and other types of literature. Variations in time and between fields are also analyzed. The results show that journal literature is increasingly important in the natural and social sciences, but that its role in the humanities is stagnant and has even tended to diminish slightly in the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be used when using bibliometric indicators that rely only on journal literature.
    Object
    Science Citation Index
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.8, S.997-1004
  4. Larivière, V.; Archambault, E.; Gingras, Y.: Long-term variations in the aging of scientific literature : from exponential growth to steady-state science (1900-2004) (2008) 0.00
    Abstract
    Despite a very large number of studies on the aging and obsolescence of scientific literature, no study has yet measured, over a very long time period, the changes in the rates at which scientific literature becomes obsolete. This article studies the evolution of the aging phenomenon and, in particular, how the age of cited literature has changed over more than 100 years of scientific activity. It shows that the average and median ages of cited literature have undergone several changes over the period. Specifically, both World War I and World War II had the effect of significantly increasing the age of the cited literature. The major finding of this article is that contrary to a widely held belief, the age of cited material has risen continuously since the mid-1960s. In other words, during that period, researchers were relying on an increasingly old body of literature. Our data suggest that this phenomenon is a direct response to the steady-state dynamics of modern science that followed its exponential growth; however, we also have observed that online preprint archives such as arXiv have had the opposite effect in some subfields.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.2, S.288-296
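The core quantity in this aging study, the age of cited literature, is just the difference between the citing paper's publication year and each cited item's year, summarized by mean and median. A tiny illustration with invented years:

```python
from statistics import mean, median

def reference_ages(citing_year, cited_years):
    """Ages of cited references relative to the citing paper's year."""
    return [citing_year - y for y in cited_years]

# Hypothetical reference list of a paper published in 2004.
ages = reference_ages(2004, [2003, 2001, 1998, 1995])
avg_age, med_age = mean(ages), median(ages)
```

Tracking these two statistics over all papers per year yields the long-term aging curves the article analyzes.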
  5. Kirchik, O.; Gingras, Y.; Larivière, V.: Changes in publication languages and citation practices and their effect on the scientific impact of Russian science (1993-2010) (2012) 0.00
    Abstract
    This article analyzes the effects of publication language on the international scientific visibility of Russia using the Web of Science (WoS). Like other developing and transition countries, it is subject to a growing pressure to "internationalize" its scientific activities, which primarily means a shift to English as a language of scientific communication. But to what extent does the transition to English improve the impact of research? The case of Russia is of interest in this respect, as the existence of many combinations of national journals and languages of publication (namely, Russian and English, including translated journals) provides a kind of natural experiment to test the effects of language and publisher's country on the international visibility of research through citations, as well as on the referencing practices of authors. Our analysis points to the conclusion that the production of original English-language papers in foreign journals is a more efficient strategy of internationalization than the mere translation of domestic journals. If the objective of a country is to maximize the international visibility of its scientific work, then the efforts should go into the promotion of publication in reputed English-language journals to profit from the added effect provided by the Matthew effect of these venues.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1411-1419
  6. Larivière, V.; Gingras, Y.: ¬The impact factor's Matthew Effect : a natural experiment in bibliometrics (2010) 0.00
    Abstract
    Since the publication of Robert K. Merton's theory of cumulative advantage in science (Matthew Effect), several empirical studies have tried to measure its presence at the level of papers, individual researchers, institutions, or countries. However, these studies seldom control for the intrinsic quality of papers or of researchers - better (however defined) papers or researchers could receive higher citation rates because they are indeed of better quality. Using an original method for controlling the intrinsic value of papers - identical duplicate papers published in different journals with different impact factors - this paper shows that the journal in which papers are published has a strong influence on their citation rates, as duplicate papers published in high-impact journals obtain, on average, twice as many citations as their identical counterparts published in journals with lower impact factors. The intrinsic value of a paper is thus not the only reason a given paper gets cited or not; there is a specific Matthew Effect attached to journals, which gives papers published there an added value over and above their intrinsic quality.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.2, S.424-427
  7. Wallace, M.L.; Gingras, Y.; Duhon, R.: ¬A new approach for detecting scientific specialties from raw cocitation networks (2009) 0.00
    Abstract
    We use a technique recently developed by V. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre (2008) to detect scientific specialties from author cocitation networks. This algorithm has distinct advantages over most previous methods used to obtain cocitation clusters since it avoids the use of similarity measures, relies entirely on the topology of the weighted network, and can be applied to relatively large networks. Most importantly, it requires no subjective interpretation of the cocitation data or of the communities found. Using two examples, we show that the resulting specialties are the smallest coherent groups of researchers (within a hierarchy of cluster sizes) and can thus be identified unambiguously. Furthermore, we confirm that these communities are indeed representative of what we know about the structure of a given scientific discipline and that as specialties, they can be accurately characterized by a few keywords (from the publication titles). We argue that this robust and efficient algorithm is particularly well-suited to cocitation networks and that the results generated can be of great use to researchers studying various facets of the structure and evolution of science.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.2, S.240-246
  8. Larivière, V.; Gingras, Y.: On the relationship between interdisciplinarity and scientific impact (2010) 0.00
    Abstract
    This article analyzes the effect of interdisciplinarity on the scientific impact of individual articles. Using all the articles published in Web of Science in 2000, we define the degree of interdisciplinarity of a given article as the percentage of its cited references made to journals of other disciplines. We show that although for all disciplines combined there is no clear correlation between the level of interdisciplinarity of articles and their citation rates, there are nonetheless some disciplines in which a higher level of interdisciplinarity is related to higher citation rates. For other disciplines, citations decline as interdisciplinarity grows. One characteristic is visible in all disciplines: Highly disciplinary and highly interdisciplinary articles have a low scientific impact. This suggests that there might be an optimum of interdisciplinarity beyond which the research is too dispersed to find its niche and under which it is too mainstream to have high impact. Finally, the relationship between interdisciplinarity and scientific impact is highly determined by the citation characteristics of the disciplines involved: Articles citing citation-intensive disciplines are more likely to be cited by those disciplines and, hence, obtain higher citation scores than would articles citing non-citation-intensive disciplines.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.1, S.126-131
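The article's measure, the degree of interdisciplinarity of an article as the percentage of its cited references made to journals of other disciplines, maps directly onto a one-line computation. A sketch; the function name and discipline labels are illustrative:

```python
def interdisciplinarity(own_discipline, cited_ref_disciplines):
    """Percentage of an article's cited references pointing to journals
    of disciplines other than the article's own."""
    if not cited_ref_disciplines:
        return 0.0
    outside = sum(1 for d in cited_ref_disciplines if d != own_discipline)
    return 100.0 * outside / len(cited_ref_disciplines)
```

An article scoring 0 is purely disciplinary; one scoring near 100 cites almost nothing from its own field.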
  9. Larivière, V.; Lozano, G.A.; Gingras, Y.: Are elite journals declining? (2014) 0.00
    Abstract
    Previous research indicates that during the past 20 years, the highest-quality work has been published in an increasingly diverse and larger group of journals. In this article, we examine whether this diversification has also affected the handful of elite journals that are traditionally considered to be the best. We examine citation patterns during the past 40 years of seven long-standing traditionally elite journals and six journals that have been increasing in importance during the past 20 years. To be among the top 5% or 1% cited papers, papers now need about twice as many citations as they did 40 years ago. Since the late 1980s and early 1990s, elite journals have been publishing a decreasing proportion of these top-cited papers. This also applies to the two journals that are typically considered as the top venues and often used as bibliometric indicators of "excellence": Science and Nature. On the other hand, several new and established journals are publishing an increasing proportion of the most-cited papers. These changes bring new challenges and opportunities for all parties. Journals can enact policies to increase or maintain their relative position in the journal hierarchy. Researchers now have the option to publish in more diverse venues knowing that their work can still reach the same audiences. Finally, evaluators and administrators need to know that although there will always be a certain prestige associated with publishing in "elite" journals, journal hierarchies are in constant flux.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.649-655
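The top-5% and top-1% thresholds discussed in this abstract can be obtained by ranking papers by citation count and reading off the cutoff. A small sketch; the helper name and sample data are mine, not the authors':

```python
def top_share_threshold(citations, share):
    """Minimum citation count a paper needs to sit within the top
    `share` (e.g. 0.05 for the top 5%) most-cited papers of a set."""
    ranked = sorted(citations, reverse=True)
    k = max(1, int(len(ranked) * share))
    return ranked[k - 1]
```

Comparing this threshold across decades is how one sees that entering the top 5% now takes roughly twice the citations it did 40 years ago.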
  10. Larivière, V.; Gingras, Y.; Sugimoto, C.R.; Tsou, A.: Team size matters : collaboration and scientific impact since 1900 (2015) 0.00
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1323-1332
  11. Bertin, M.; Atanassova, I.; Gingras, Y.; Larivière, V.: ¬The invariant distribution of references in scientific articles (2016) 0.00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.164-177
  12. Gingras, Y.: Bibliometrics and research evaluation : uses and abuses (2016) 0.00
    Abstract
    The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
    Series
    History and foundations of information science
  13. Larivière, V.; Gingras, Y.: On the prevalence and scientific impact of duplicate publications in different scientific fields (1980-2007) (2010) 0.00
    Abstract
    Purpose - The issue of duplicate publications has received a lot of attention in the medical literature, but much less in the information science community. This paper aims to analyze the prevalence and scientific impact of duplicate publications across all fields of research between 1980 and 2007. Design/methodology/approach - The approach is a bibliometric analysis of duplicate papers based on their metadata. Duplicate papers are defined as papers published in two different journals having: the exact same title; the same first author; and the same number of cited references. Findings - In all fields combined, the prevalence of duplicates is one out of 2,000 papers, but is higher in the natural and medical sciences than in the social sciences and humanities. A very high proportion (>85 percent) of these papers are published the same year or one year apart, which suggest that most duplicate papers were submitted simultaneously. Furthermore, duplicate papers are generally published in journals with impact factors below the average of their field and obtain lower citations. Originality/value - The paper provides clear evidence that the prevalence of duplicate papers is low and, more importantly, that the scientific impact of such papers is below average.
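The paper's operational definition of a duplicate (same title, same first author, same number of cited references) maps directly onto a grouping key. A minimal sketch with invented records:

```python
from collections import defaultdict

def candidate_duplicates(papers):
    """Group papers by the study's operational key - identical title,
    same first author, same number of cited references - and return
    the groups containing more than one paper."""
    groups = defaultdict(list)
    for p in papers:
        groups[(p["title"], p["first_author"], p["n_refs"])].append(p)
    return [g for g in groups.values() if len(g) > 1]

# Invented records: the first two papers form a candidate duplicate pair.
papers = [
    {"title": "A", "first_author": "X", "n_refs": 30, "journal": "J1"},
    {"title": "A", "first_author": "X", "n_refs": 30, "journal": "J2"},
    {"title": "B", "first_author": "Y", "n_refs": 12, "journal": "J1"},
]
dups = candidate_duplicates(papers)
```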
  14. Lozano, G.A.; Larivière, V.; Gingras, Y.: ¬The weakening relationship between the impact factor and papers' citations in the digital age (2012) 0.00
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.11, S.2140-2145