Search (56 results, page 1 of 3)

  Filter: author_ss:"Thelwall, M."
  1. Kousha, K.; Thelwall, M.: An automatic method for extracting citations from Google Books (2015) 0.12
    0.12422483 = product of:
      0.18633723 = sum of:
        0.14898928 = weight(_text_:citation in 1658) [ClassicSimilarity], result of:
          0.14898928 = score(doc=1658,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 1658, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1658)
        0.03734795 = product of:
          0.0746959 = sum of:
            0.0746959 = weight(_text_:index in 1658) [ClassicSimilarity], result of:
              0.0746959 = score(doc=1658,freq=4.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3413878 = fieldWeight in 1658, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1658)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
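    The breakdown above is standard Lucene ClassicSimilarity (TF-IDF) arithmetic: each matching query term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the sum is scaled by a coordination factor for the fraction of query clauses that matched. A minimal Python sketch (not the engine's own code) reproduces the 0.12422483 total from the numbers shown in this explanation:

```python
import math

def field_weight(freq, idf, field_norm):
    # Lucene ClassicSimilarity: tf = sqrt(term frequency within the field)
    return math.sqrt(freq) * idf * field_norm

def term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm                      # idf * queryNorm
    return query_weight * field_weight(freq, idf, field_norm)

query_norm, field_norm = 0.050071523, 0.0390625

# "citation" appears 12 times and "index" 4 times in document 1658
citation = term_score(12.0, 4.6892867, query_norm, field_norm)
index = term_score(4.0, 4.369764, query_norm, field_norm) * 0.5   # coord(1/2) on the nested clause

total = (citation + index) * (2.0 / 3.0)                 # coord(2/3): 2 of 3 query clauses matched
print(total)  # ~0.12422, matching the explain total above
```

    The other score breakdowns in this result list follow the same pattern; only the term frequencies, idf values, field norms, and coordination factors change.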
    
    Abstract
    Recent studies have shown that counting citations from books can help scholarly impact assessment and that Google Books (GB) is a useful source of such citation counts, despite its lack of a public citation index. Searching GB for citations produces approximate matches, however, and so its raw results need time-consuming human filtering. In response, this article introduces a method to automatically remove false and irrelevant matches from GB citation searches in addition to introducing refinements to a previous GB manual citation extraction method. The method was evaluated by manual checking of sampled GB results and comparing citations to about 14,500 monographs in the Thomson Reuters Book Citation Index (BKCI) against automatically extracted citations from GB across 24 subject areas. GB citations were 103% to 137% as numerous as BKCI citations in the humanities, except for tourism (72%) and linguistics (91%), 46% to 85% in social sciences, but only 8% to 53% in the sciences. In all cases, however, GB had substantially more citing books than did BKCI, with BKCI's results coming predominantly from journal articles. Moderate correlations between the GB and BKCI citation counts in the social sciences and humanities, with most BKCI results coming from journal articles rather than books, nevertheless suggest that the two sources may measure different aspects of impact.
  2. Levitt, J.M.; Thelwall, M.: Citation levels and collaboration within library and information science (2009) 0.12
    0.11531623 = product of:
      0.17297433 = sum of:
        0.14898928 = weight(_text_:citation in 2734) [ClassicSimilarity], result of:
          0.14898928 = score(doc=2734,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 2734, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2734)
        0.023985062 = product of:
          0.047970124 = sum of:
            0.047970124 = weight(_text_:22 in 2734) [ClassicSimilarity], result of:
              0.047970124 = score(doc=2734,freq=4.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.27358043 = fieldWeight in 2734, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2734)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Collaboration is a major research policy objective, but does it deliver higher quality research? This study uses citation analysis to examine the Web of Science (WoS) Information Science & Library Science subject category (IS&LS) to ascertain whether, in general, more highly cited articles are more highly collaborative than other articles. It consists of two investigations. The first investigation is a longitudinal comparison of the degree and proportion of collaboration in five strata of citation; it found that collaboration in the highest four citation strata (all in the most highly cited 22%) increased in unison over time, whereas collaboration in the lowest citation stratum (un-cited articles) remained low and stable. Given that over 40% of the articles were un-cited, it seems important to take into account the differences found between un-cited articles and relatively highly cited articles when investigating collaboration in IS&LS. The second investigation compares collaboration for 35 influential information scientists; it found that their more highly cited articles on average were not more highly collaborative than their less highly cited articles. In summary, although collaborative research is conducive to high citation in general, collaboration has apparently not tended to be essential to the success of current and former elite information scientists.
    Date
    22. 3.2009 12:43:51
  3. Kousha, K.; Thelwall, M.: Patent citation analysis with Google (2017) 0.11
    0.10827799 = product of:
      0.16241698 = sum of:
        0.136008 = weight(_text_:citation in 3317) [ClassicSimilarity], result of:
          0.136008 = score(doc=3317,freq=10.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.57925105 = fieldWeight in 3317, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3317)
        0.026408987 = product of:
          0.052817974 = sum of:
            0.052817974 = weight(_text_:index in 3317) [ClassicSimilarity], result of:
              0.052817974 = score(doc=3317,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.24139762 = fieldWeight in 3317, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3317)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Citations from patents to scientific publications provide useful evidence about the commercial impact of academic research, but automatically searchable databases are needed to exploit this connection for large-scale patent citation evaluations. Google covers multiple different international patent office databases but does not index patent citations or allow automatic searches. In response, this article introduces a semiautomatic indirect method via Bing to extract and filter patent citations from Google to academic papers with an overall precision of 98%. The method was evaluated with 322,192 science and engineering Scopus articles from every second year for the period 1996-2012. Although manual Google Patent searches give more results, especially for articles with many patent citations, the difference is not large enough to be a major problem. Within Biomedical Engineering, Biotechnology, and Pharmacology & Pharmaceutics, 7% to 10% of Scopus articles had at least one patent citation but other fields had far fewer, so patent citation analysis is only relevant for a minority of publications. Low but positive correlations between Google Patent citations and Scopus citations across all fields suggest that traditional citation counts cannot substitute for patent citations when evaluating research.
  4. Thelwall, M.: Bibliometrics to webometrics (2009) 0.10
    0.1049328 = product of:
      0.15739919 = sum of:
        0.12042661 = weight(_text_:citation in 4239) [ClassicSimilarity], result of:
          0.12042661 = score(doc=4239,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.51289076 = fieldWeight in 4239, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4239)
        0.036972582 = product of:
          0.073945165 = sum of:
            0.073945165 = weight(_text_:index in 4239) [ClassicSimilarity], result of:
              0.073945165 = score(doc=4239,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.33795667 = fieldWeight in 4239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4239)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Bibliometrics has changed out of all recognition since 1958: becoming established as a field, being taught widely in library and information science schools, and being at the core of a number of science evaluation research groups around the world. This was all made possible by the work of Eugene Garfield and his Science Citation Index. This article reviews the distance that bibliometrics has travelled since 1958 by comparing early bibliometrics with current practice, and by giving an overview of a range of recent developments, such as patent analysis, national research evaluation exercises, visualization techniques, new applications, online citation indexes, and the creation of digital libraries. Webometrics, a modern, fast-growing offshoot of bibliometrics, is reviewed in detail. Finally, future prospects are discussed with regard to both bibliometrics and webometrics.
  5. Thelwall, M.; Maflahi, N.: Guideline references and academic citations as evidence of the clinical value of health research (2016) 0.10
    0.09784907 = product of:
      0.1467736 = sum of:
        0.12642162 = weight(_text_:citation in 2856) [ClassicSimilarity], result of:
          0.12642162 = score(doc=2856,freq=6.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5384232 = fieldWeight in 2856, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=2856)
        0.020351999 = product of:
          0.040703997 = sum of:
            0.040703997 = weight(_text_:22 in 2856) [ClassicSimilarity], result of:
              0.040703997 = score(doc=2856,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.23214069 = fieldWeight in 2856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2856)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article introduces a new source of evidence of the value of medical-related research: citations from clinical guidelines. These give evidence that research findings have been used to inform the day-to-day practice of medical staff. To identify whether citations from guidelines can give different information from that of traditional citation counts, this article assesses the extent to which references in clinical guidelines tend to be highly cited in the academic literature and highly read in Mendeley. Using evidence from the United Kingdom, references associated with the UK's National Institute for Health and Clinical Excellence (NICE) guidelines tended to be substantially more cited than comparable articles, unless they had been published in the most recent 3 years. Citation counts also seemed to be stronger indicators than Mendeley readership altmetrics. Hence, although presence in guidelines may be particularly useful to highlight the contributions of recently published articles, for older articles citation counts may already be sufficient to recognize their contributions to health in society.
    Date
    19. 3.2016 12:22:00
  6. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.08
    0.0823832 = product of:
      0.1235748 = sum of:
        0.1032228 = weight(_text_:citation in 3211) [ClassicSimilarity], result of:
          0.1032228 = score(doc=3211,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.4396206 = fieldWeight in 3211, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.020351999 = product of:
          0.040703997 = sum of:
            0.040703997 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
              0.040703997 = score(doc=3211,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.23214069 = fieldWeight in 3211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22
  7. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.08
    0.0823832 = product of:
      0.1235748 = sum of:
        0.1032228 = weight(_text_:citation in 4291) [ClassicSimilarity], result of:
          0.1032228 = score(doc=4291,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.4396206 = fieldWeight in 4291, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=4291)
        0.020351999 = product of:
          0.040703997 = sum of:
            0.040703997 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
              0.040703997 = score(doc=4291,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.23214069 = fieldWeight in 4291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4291)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum thresholds of 10 tweets, 100 Mendeley saves, and 10 citations. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
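    A schematic sketch of the pairing described in the abstract above (item 7): for each source, two articles are paired when the same account tweets them, the same Mendeley library saves them, or the same publication cites them, and the resulting pair sets are then compared across sources. The data and the overlap measure below are invented for illustration; they are not the study's actual thresholds or metrics.

```python
from itertools import combinations

def co_pairs(user_to_articles):
    """All unordered article pairs that share at least one user (tweeter,
    Mendeley library owner, or citing publication, depending on the source)."""
    pairs = set()
    for articles in user_to_articles.values():
        pairs.update(combinations(sorted(set(articles)), 2))
    return pairs

def overlap(a, b):
    """Share of the smaller pair set that also appears in the other set."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Invented toy data: accounts mapped to the article IDs they engaged with.
tweeted = {"@u1": ["A", "B"], "@u2": ["B", "C"]}
saved   = {"m1": ["A", "B", "C"], "m2": ["C", "D"]}
cited   = {"p1": ["A", "D"], "p2": ["B", "C"]}

co_tweeted, co_saved, co_cited = co_pairs(tweeted), co_pairs(saved), co_pairs(cited)
print(overlap(co_tweeted, co_saved), overlap(co_saved, co_cited))  # 1.0 0.5
```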
  8. Thelwall, M.: Are Mendeley reader counts high enough for research evaluations when articles are published? (2017) 0.07
    0.068652675 = product of:
      0.102979004 = sum of:
        0.086019 = weight(_text_:citation in 3806) [ClassicSimilarity], result of:
          0.086019 = score(doc=3806,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.36635053 = fieldWeight in 3806, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 3806) [ClassicSimilarity], result of:
              0.03392 = score(doc=3806,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 3806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3806)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published.
    Design/methodology/approach - Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016.
    Findings - Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count.
    Research limitations/implications - Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases.
    Practical implications - Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents but are not high enough to differentiate between the impacts of typical individual articles.
    Originality/value - This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
    Date
    20. 1.2015 18:30:22
  9. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.07
    0.068652675 = product of:
      0.102979004 = sum of:
        0.086019 = weight(_text_:citation in 995) [ClassicSimilarity], result of:
          0.086019 = score(doc=995,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.36635053 = fieldWeight in 995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=995)
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 995) [ClassicSimilarity], result of:
              0.03392 = score(doc=995,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=995)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Collaboration is encouraged because it is believed to improve academic research, supported by indirect evidence in the form of more coauthored articles being more cited. Nevertheless, this might not reflect quality but increased self-citations or the "audience effect": citations from increased awareness through multiple author networks. We address this with the first science-wide investigation into whether author numbers associate with journal article quality, using expert peer quality judgments for 122,331 articles from the 2014-20 UK national assessment. Spearman correlations between author numbers and quality scores show moderately strong positive associations (0.2-0.4) in the health, life, and physical sciences, but weak or no positive associations in engineering and social sciences, with weak negative/positive or no associations in various arts and humanities, and a possible negative association for decision sciences. This gives the first systematic evidence that greater numbers of authors associate with higher quality journal articles in the majority of academia outside the arts and humanities, at least for the UK. Positive associations between team size and citation counts in areas with little association between team size and quality also show that audience effects or other nonquality factors account for the higher citation rates of coauthored articles in some fields.
    Date
    22. 6.2023 18:11:50
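    The core calculation in the abstract above (item 9) is a rank correlation between the number of authors and an expert quality score. A minimal sketch with invented data; the real study uses REF-style peer-review scores for 122,331 articles and reports field-level correlations of roughly 0.2-0.4.

```python
from scipy.stats import spearmanr

# Invented (author count, peer-review quality score 1-4) pairs standing in
# for the assessment data described above.
articles = [(1, 2), (2, 3), (3, 3), (4, 4), (2, 2), (6, 4), (1, 1), (5, 3)]

authors = [a for a, _ in articles]
quality = [q for _, q in articles]

rho, p_value = spearmanr(authors, quality)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```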
  10. Levitt, J.M.; Thelwall, M.; Oppenheim, C.: Variations between subjects in the extent to which the social sciences have become more interdisciplinary (2011) 0.06
    0.05815574 = product of:
      0.08723361 = sum of:
        0.06082462 = weight(_text_:citation in 4465) [ClassicSimilarity], result of:
          0.06082462 = score(doc=4465,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.25904894 = fieldWeight in 4465, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4465)
        0.026408987 = product of:
          0.052817974 = sum of:
            0.052817974 = weight(_text_:index in 4465) [ClassicSimilarity], result of:
              0.052817974 = score(doc=4465,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.24139762 = fieldWeight in 4465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4465)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Increasing interdisciplinarity has been a policy objective since the 1990s, promoted by many governments and funding agencies, but the question is: How deeply has this affected the social sciences? Although numerous articles have suggested that research has become more interdisciplinary, no study has compared the extent to which the interdisciplinarity of different social science subjects has changed. To address this gap, changes in the level of interdisciplinarity since 1980 are investigated for subjects with many articles in the Social Sciences Citation Index (SSCI), using the percentage of cross-disciplinary citing documents (PCDCD) to evaluate interdisciplinarity. For the 14 SSCI subjects investigated, the median level of interdisciplinarity, as measured using cross-disciplinary citations, declined from 1980 to 1990, but rose sharply between 1990 and 2000, confirming previous research. This increase was not fully matched by an increase in the percentage of articles that were assigned to more than one subject category. Nevertheless, although on average the social sciences have recently become more interdisciplinary, the extent of this change varies substantially from subject to subject. The SSCI subject with the largest increase in interdisciplinarity between 1990 and 2000 was Information Science & Library Science (IS&LS) but there is evidence that the level of interdisciplinarity of IS&LS increased less quickly during the first decade of this century.
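    The PCDCD metric named above (item 10) can be read as the share of citing documents whose subject categories do not include the focal subject. A minimal sketch under that reading; the category labels and data are invented, and the original operationalisation may differ in detail.

```python
def pcdcd(focal_subject, citing_docs):
    """Percentage of cross-disciplinary citing documents (one plausible reading):
    the share of citing documents whose subject categories exclude the focal subject."""
    if not citing_docs:
        return 0.0
    cross = sum(1 for categories in citing_docs if focal_subject not in categories)
    return 100.0 * cross / len(citing_docs)

# Hypothetical citing documents for an IS&LS article, each with its subject categories.
citing = [
    {"Information Science & Library Science"},
    {"Computer Science, Information Systems"},
    {"Information Science & Library Science", "Computer Science, Interdisciplinary Applications"},
    {"Management"},
]
print(pcdcd("Information Science & Library Science", citing))  # 50.0
```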
  11. Kousha, K.; Thelwall, M.: Google Scholar citations and Google Web/URL citations : a multi-discipline exploratory analysis (2007) 0.05
    0.05364227 = product of:
      0.1609268 = sum of:
        0.1609268 = weight(_text_:citation in 337) [ClassicSimilarity], result of:
          0.1609268 = score(doc=337,freq=14.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.685379 = fieldWeight in 337, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=337)
      0.33333334 = coord(1/3)
    
    Abstract
    We use a new data gathering method, "Web/URL citation," and Google Scholar to compare traditional and Web-based citation patterns across multiple disciplines (biology, chemistry, physics, computing, sociology, economics, psychology, and education) based upon a sample of 1,650 articles from 108 open access (OA) journals published in 2001. A Web/URL citation of an online journal article is a Web mention of its title, URL, or both. For each discipline, except psychology, we found significant correlations between Thomson Scientific (formerly Thomson ISI, here: ISI) citations and both Google Scholar and Google Web/URL citations. Google Scholar citations correlated more highly with ISI citations than did Google Web/URL citations, indicating that the Web/URL method measures a broader type of citation phenomenon. Google Scholar citations were more numerous than ISI citations in computer science and the four social science disciplines, suggesting that Google Scholar is more comprehensive for social sciences and perhaps also when conference articles are valued and published online. We also found large disciplinary differences in the percentage overlap between ISI and Google Scholar citation sources. Finally, although we found many significant trends, there were also numerous exceptions, suggesting that replacing traditional citation sources with the Web or Google Scholar for research impact calculations would be problematic.
    Theme
    Citation indexing
  12. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.05
    0.051856413 = product of:
      0.07778462 = sum of:
        0.06082462 = weight(_text_:citation in 586) [ClassicSimilarity], result of:
          0.06082462 = score(doc=586,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.25904894 = fieldWeight in 586, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.03392 = score(doc=586,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
  13. Thelwall, M.; Sud, P.; Wilkinson, D.: Link and co-inlink network diagrams with URL citations or title mentions (2012) 0.05
    0.051856413 = product of:
      0.07778462 = sum of:
        0.06082462 = weight(_text_:citation in 57) [ClassicSimilarity], result of:
          0.06082462 = score(doc=57,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.25904894 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=57)
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.03392 = score(doc=57,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Webometric network analyses have been used to map the connectivity of groups of websites to identify clusters, important sites or overall structure. Such analyses have mainly been based upon hyperlink counts, the number of hyperlinks between a pair of websites, although some have used title mentions or URL citations instead. The ability to automatically gather hyperlink counts from Yahoo! ceased in April 2011 and the ability to manually gather such counts was due to cease by early 2012, creating a need for alternatives. This article assesses URL citations and title mentions as possible replacements for hyperlinks in both binary and weighted direct link and co-inlink network diagrams. It also assesses three different types of data for the network connections: hit count estimates, counts of matching URLs, and filtered counts of matching URLs. Results from analyses of U.S. library and information science departments and U.K. universities give evidence that metrics based upon URLs or titles can be appropriate replacements for metrics based upon hyperlinks for both binary and weighted networks, although filtered counts of matching URLs are necessary to give the best results for co-title mention and co-URL citation network diagrams.
    Date
    6. 4.2012 18:16:22
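    The network diagrams discussed above (item 13) can be built directly from pairwise counts. A sketch using networkx with invented URL-citation counts: a weighted direct-link network, plus a binary co-inlink network in which two sites are tied when some third site cites both. This illustrates the network types only; it is not the study's data or code.

```python
import networkx as nx
from itertools import combinations

# Invented URL-citation counts between websites: (citing site, cited site, count),
# standing in for the hyperlink / URL-citation / title-mention counts discussed above.
counts = [("lis1.edu", "lis2.edu", 12), ("lis1.edu", "lis3.edu", 3),
          ("lis2.edu", "lis3.edu", 7), ("lis4.edu", "lis3.edu", 5),
          ("lis4.edu", "lis2.edu", 2)]

# Weighted direct-link network: one arc per citing/cited pair, weight = count.
direct = nx.DiGraph()
direct.add_weighted_edges_from(counts)

# Binary co-inlink network: two sites are tied if a third site cites both of them.
co_inlink = nx.Graph()
for source in direct.nodes:
    for a, b in combinations(list(direct.successors(source)), 2):
        co_inlink.add_edge(a, b)

print(direct["lis1.edu"]["lis2.edu"]["weight"])   # 12
print(sorted(co_inlink.edges()))
```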
  14. Kousha, K.; Thelwall, M.; Rezaie, S.: Assessing the citation impact of books : the role of Google Books, Google Scholar, and Scopus (2011) 0.05
    0.049663093 = product of:
      0.14898928 = sum of:
        0.14898928 = weight(_text_:citation in 4920) [ClassicSimilarity], result of:
          0.14898928 = score(doc=4920,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 4920, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4920)
      0.33333334 = coord(1/3)
    
    Abstract
    Citation indicators are increasingly used in some subject areas to support peer review in the evaluation of researchers and departments. Nevertheless, traditional journal-based citation indexes may be inadequate for the citation impact assessment of book-based disciplines. This article examines whether online citations from Google Books and Google Scholar can provide alternative sources of citation evidence. To investigate this, we compared the citation counts to 1,000 books submitted to the 2008 U.K. Research Assessment Exercise (RAE) from Google Books and Google Scholar with Scopus citations across seven book-based disciplines (archaeology; law; politics and international studies; philosophy; sociology; history; and communication, cultural, and media studies). Google Books and Google Scholar citations to books were 1.4 and 3.2 times more common than were Scopus citations, and their medians were more than twice and three times as high as were Scopus median citations, respectively. This large number of citations is evidence that in book-oriented disciplines in the social sciences, arts, and humanities, online book citations may be sufficiently numerous to support peer review for research evaluation, at least in the United Kingdom.
  15. Thelwall, M.; Levitt, J.M.: National scientific performance evolution patterns : retrenchment, successful expansion, or overextension (2018) 0.05
    0.049663093 = product of:
      0.14898928 = sum of:
        0.14898928 = weight(_text_:citation in 4225) [ClassicSimilarity], result of:
          0.14898928 = score(doc=4225,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 4225, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4225)
      0.33333334 = coord(1/3)
    
    Abstract
    National governments would like to preside over an expanding and increasingly high-impact science system but are these two goals largely independent or closely linked? This article investigates the relationship between changes in the share of the world's scientific output and changes in relative citation impact for 2.6 million articles from 26 fields in the 25 countries with the most Scopus-indexed journal articles from 1996 to 2015. There is a negative correlation between expansion and relative citation impact, but their relationship varies. China, Spain, Australia, and Poland were successful overall across the 26 fields, expanding both their share of the world's output and its relative citation impact, whereas Japan, France, Sweden, and Israel had decreased shares and relative citation impact. In contrast, the USA, UK, Germany, Italy, Russia, The Netherlands, Switzerland, Finland, and Denmark all enjoyed increased relative citation impact despite a declining share of publications. Finally, India, South Korea, Brazil, Taiwan, and Turkey all experienced sustained expansion but a recent fall in relative citation impact. These results may partly reflect changes in the coverage of Scopus and the selection of fields.
  16. Thelwall, M.: Female citation impact superiority 1996-2018 in six out of seven English-speaking nations (2020) 0.05
    0.049663093 = product of:
      0.14898928 = sum of:
        0.14898928 = weight(_text_:citation in 5948) [ClassicSimilarity], result of:
          0.14898928 = score(doc=5948,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 5948, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5948)
      0.33333334 = coord(1/3)
    
    Abstract
    Efforts to combat continuing gender inequalities in academia need to be informed by evidence about where differences occur. Citations are relevant as potential evidence in appointment and promotion decisions, but it is unclear whether there have been historical gender differences in average citation impact that might explain the current shortfall of senior female academics. This study investigates the evolution of gender differences in citation impact 1996-2018 for six million articles from seven large English-speaking nations: Australia, Canada, Ireland, Jamaica, New Zealand, UK, and the USA. The results show that a small female citation advantage has been the norm over time for all these countries except the USA, where there has been no practical difference. The female citation advantage is largest, and statistically significant in most years, for Australia and the UK. This suggests that any academic bias against citing female-authored research cannot explain current employment inequalities. Nevertheless, comparisons using recent citation data, or avoiding it altogether, during appointments or promotion may disadvantage females in some countries by underestimating the likely greater impact of their work, especially in the long term.
  17. Thelwall, M.; Maflahi, N.: Academic collaboration rates and citation associations vary substantially between countries and fields (2020) 0.05
    0.049663093 = product of:
      0.14898928 = sum of:
        0.14898928 = weight(_text_:citation in 5952) [ClassicSimilarity], result of:
          0.14898928 = score(doc=5952,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 5952, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5952)
      0.33333334 = coord(1/3)
    
    Abstract
    Research collaboration is promoted by governments and research funders, but if the relative prevalence and merits of collaboration vary internationally then different national and disciplinary strategies may be needed to promote it. This study compares the team size and field normalized citation impact of research across all 27 Scopus broad fields in the 10 countries with the most journal articles indexed in Scopus 2008-2012. The results show that team size varies substantially by discipline and country, with Japan (4.2) having two-thirds more authors per article than the United Kingdom (2.5). Solo authorship is rare in China (4%) but common in the United Kingdom (27%). While increasing team size associates with higher citation impact in almost all countries and fields, this association is much weaker in China than elsewhere. There are also field differences in the association between citation impact and collaboration. For example, larger team sizes in the Business, Management & Accounting category do not seem to associate with greater research impact, and for China and India, solo authorship associates with higher citation impact in this field. Overall, there are substantial international and field differences in the extent to which researchers collaborate and the extent to which collaboration associates with higher citation impact.
  18. Didegah, F.; Thelwall, M.: Determinants of research citation impact in nanoscience and nanotechnology (2013) 0.05
    0.048659697 = product of:
      0.14597909 = sum of:
        0.14597909 = weight(_text_:citation in 737) [ClassicSimilarity], result of:
          0.14597909 = score(doc=737,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.62171745 = fieldWeight in 737, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=737)
      0.33333334 = coord(1/3)
    
    Abstract
    This study investigates a range of metrics available when a nanoscience and nanotechnology article is published to see which metrics correlate more with the number of citations to the article. It also introduces the degree of internationality of journals and references as new metrics for this purpose. The journal impact factor; the impact of references; the internationality of authors, journals, and references; and the number of authors, institutions, and references were all calculated for papers published in nanoscience and nanotechnology journals in the Web of Science from 2007 to 2009. Using a zero-inflated negative binomial regression model on the data set, the impact factor of the publishing journal and the citation impact of the cited references were found to be the most effective determinants of citation counts in all four time periods. In the entire 2007 to 2009 period, apart from journal internationality and author numbers and internationality, all other predictor variables had significant effects on citation counts.
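    The model named above (item 18) is a zero-inflated negative binomial regression of citation counts on article-level metrics. A self-contained sketch with synthetic data using statsmodels; the predictors merely stand in for the metrics listed in the abstract, and the specification here is illustrative, not the authors' actual model.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the predictors named above: journal impact factor,
# citation impact of the cited references, and number of authors.
X = np.column_stack([
    rng.lognormal(1.0, 0.5, n),    # journal impact factor
    rng.lognormal(1.5, 0.6, n),    # impact of cited references
    rng.integers(1, 10, n),        # number of authors
])
X = sm.add_constant(X)

# Synthetic citation counts with excess zeros, the situation ZINB is designed for.
citations = rng.negative_binomial(1, 0.3, n) * rng.binomial(1, 0.7, n)

# Intercept-only inflation part; optimiser settings may need tuning on real data.
model = ZeroInflatedNegativeBinomialP(citations, X, exog_infl=np.ones((n, 1)))
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
```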
  19. Thelwall, M.: Mendeley readership altmetrics for medical articles : an analysis of 45 fields (2016) 0.05
    0.048659697 = product of:
      0.14597909 = sum of:
        0.14597909 = weight(_text_:citation in 3055) [ClassicSimilarity], result of:
          0.14597909 = score(doc=3055,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.62171745 = fieldWeight in 3055, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=3055)
      0.33333334 = coord(1/3)
    
    Abstract
    Medical research is highly funded and often expensive and so is particularly important to evaluate effectively. Nevertheless, citation counts may accrue too slowly for use in some formal and informal evaluations. It is therefore important to investigate whether alternative metrics could be used as substitutes. This article assesses whether one such altmetric, Mendeley readership counts, correlates strongly with citation counts across all medical fields, whether the relationship is stronger if student readers are excluded, and whether they are distributed similarly to citation counts. Based on a sample of 332,975 articles from 2009 in 45 medical fields in Scopus, citation counts correlated strongly (about 0.7; 78% of articles had at least one reader) with Mendeley readership counts (from the new version 1 application programming interface [API]) in almost all fields, with one minor exception, and the correlations tended to decrease slightly when student readers were excluded. Readership followed either a lognormal or a hooked power law distribution, whereas citations always followed a hooked power law, showing that the two may have underlying differences.
  20. Thelwall, M.: A comparison of link and URL citation counting (2011) 0.05
    0.045336 = product of:
      0.136008 = sum of:
        0.136008 = weight(_text_:citation in 4533) [ClassicSimilarity], result of:
          0.136008 = score(doc=4533,freq=10.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.57925105 = fieldWeight in 4533, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4533)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - Link analysis is an established topic within webometrics. It normally uses counts of links between sets of web sites or to sets of web sites. These link counts are derived from web crawlers or commercial search engines with the latter being the only alternative for some investigations. This paper compares link counts with URL citation counts in order to assess whether the latter could be a replacement for the former if the major search engines withdraw their advanced hyperlink search facilities.
    Design/methodology/approach - URL citation counts are compared with link counts for a variety of data sets used in previous webometric studies.
    Findings - The results show a high degree of correlation between the two but with URL citations being much less numerous, at least outside academia and business.
    Research limitations/implications - The results cover a small selection of 15 case studies and so the findings are only indicative. Significant differences between results indicate that the difference between link counts and URL citation counts will vary between webometric studies.
    Practical implications - Should link searches be withdrawn, then link analyses of less well linked non-academic, non-commercial sites would be seriously weakened, although citations based on e-mail addresses could help to make citations more numerous than links for some business and academic contexts.
    Originality/value - This is the first systematic study of the difference between link counts and URL citation counts in a variety of contexts and it shows that there are significant differences between the two.