Search (32 results, page 1 of 2)

  • author_ss:"Thelwall, M."
  1. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.08
    0.07781324 = product of:
      0.15562648 = sum of:
        0.10374144 = weight(_text_:fields in 3211) [ClassicSimilarity], result of:
          0.10374144 = score(doc=3211,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.32825118 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.051885046 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
          0.051885046 = score(doc=3211,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.23214069 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
      0.5 = coord(2/4)
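The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: each term's score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the sum is scaled by the coordination factor. A minimal Python sketch reproduces this first result's score from the figures quoted in the tree (the constants are copied from the output; the function names are ours):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    tf = math.sqrt(freq)  # ClassicSimilarity tf(freq) = sqrt(freq)
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.06382575  # queryNorm quoted in the tree

# weight(_text_:fields in 3211): freq=2, docFreq=849, fieldNorm=0.046875
s_fields = term_score(2.0, 849, 44218, 0.046875, QUERY_NORM)
# weight(_text_:22 in 3211): freq=2, docFreq=3622, fieldNorm=0.046875
s_22 = term_score(2.0, 3622, 44218, 0.046875, QUERY_NORM)

# coord(2/4): 2 of 4 query terms matched this document
total = (s_fields + s_22) * (2 / 4)
print(f"{total:.8f}")
```

The printed value matches the 0.07781324 at the top of the tree to within floating-point rounding.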
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22
  2. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.06
    
    Abstract
    Collaboration is encouraged because it is believed to improve academic research, supported by indirect evidence in the form of coauthored articles being more cited. Nevertheless, this might not reflect quality but increased self-citations or the "audience effect": citations from increased awareness through multiple author networks. We address this with the first science-wide investigation into whether author numbers associate with journal article quality, using expert peer quality judgments for 122,331 articles from the 2014-20 UK national assessment. Spearman correlations between author numbers and quality scores show moderately strong positive associations (0.2-0.4) in the health, life, and physical sciences, but weak or no positive associations in engineering and social sciences, with weak negative/positive or no associations in various arts and humanities, and a possible negative association for decision sciences. This gives the first systematic evidence that greater numbers of authors associate with higher-quality journal articles in the majority of academia outside the arts and humanities, at least for the UK. Positive associations between team size and citation counts in areas with little association between team size and quality also show that audience effects or other nonquality factors account for the higher citation rates of coauthored articles in some fields.
    Date
    22. 6.2023 18:11:50
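The Spearman correlations reported in the abstract above are plain rank correlations: replace each variable by its (tie-averaged) ranks and take the Pearson correlation of the ranks. A self-contained sketch, using invented toy data rather than the paper's:

```python
def rank(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: author counts vs. peer-review quality scores
authors = [1, 2, 3, 4, 5, 6]
quality = [1, 2, 2, 3, 3, 4]
print(round(spearman(authors, quality), 3))  # → 0.971
```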
  3. Thelwall, M.: Mendeley readership altmetrics for medical articles : an analysis of 45 fields (2016) 0.05
    
    Abstract
    Medical research is highly funded and often expensive and so is particularly important to evaluate effectively. Nevertheless, citation counts may accrue too slowly for use in some formal and informal evaluations. It is therefore important to investigate whether alternative metrics could be used as substitutes. This article assesses whether one such altmetric, Mendeley readership counts, correlates strongly with citation counts across all medical fields, whether the relationship is stronger if student readers are excluded, and whether they are distributed similarly to citation counts. Based on a sample of 332,975 articles from 2009 in 45 medical fields in Scopus, citation counts correlated strongly (about 0.7; 78% of articles had at least one reader) with Mendeley readership counts (from the new version 1 application programming interface [API]) in almost all fields, with one minor exception, and the correlations tended to decrease slightly when student readers were excluded. Readership followed either a lognormal or a hooked power law distribution, whereas citations always followed a hooked power law, showing that the two may have underlying differences.
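The "hooked power law" mentioned in the abstract above is usually parameterized in the informetrics literature as f(x) proportional to 1/(B + x)^alpha, where B > 0 flattens ("hooks") the head of an otherwise pure power law. A small sketch over a finite support, with illustrative parameter values that are not fitted figures from the paper:

```python
def hooked_power_law(alpha, B, support):
    """f(x) proportional to 1 / (B + x)^alpha, normalized over a
    finite support; B > 0 produces the flattened 'hook' at low x."""
    weights = [1.0 / (B + x) ** alpha for x in support]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative parameters only (not values from the paper)
pmf = hooked_power_law(alpha=2.5, B=5.0, support=range(1000))
print(round(sum(pmf), 6))  # → 1.0
```

Unlike a pure power law, the ratio pmf[0]/pmf[1] stays close to 1 for large B, which is what makes the distribution fit citation-like data with many low-valued observations.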
  4. Thelwall, M.; Klitkou, A.; Verbeek, A.; Stuart, D.; Vincent, C.: Policy-relevant Webometrics for individual scientific fields (2010) 0.04
    
    Abstract
    Despite over 10 years of research there is no agreement on the most suitable roles for Webometric indicators in support of research policy and almost no field-based Webometrics. This article partly fills these gaps by analyzing the potential of policy-relevant Webometrics for individual scientific fields with the help of 4 case studies. Although Webometrics cannot provide robust indicators of knowledge flows or research impact, it can provide some evidence of networking and mutual awareness. The scope of Webometrics is also relatively wide, including not only research organizations and firms but also intermediary groups like professional associations, Web portals, and government agencies. Webometrics can, therefore, provide evidence about the research process to complement peer review, bibliometric, and patent indicators: tracking the early, mainly prepublication development of new fields and research funding initiatives, assessing the role and impact of intermediary organizations and the need for new ones, and monitoring the extent of mutual awareness in particular research areas.
  5. Thelwall, M.; Kousha, K.; Stuart, E.; Makita, M.; Abdoli, M.; Wilson, P.; Levitt, J.: In which fields are citations indicators of research quality? (2023) 0.04
    
    Abstract
    Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the relationship between citations and research quality is poorly evidenced. We report the first large-scale science-wide academic evaluation of the relationship between research quality and citations (field normalized citation counts), correlating them for 87,739 journal articles in 34 field-based UK Units of Assessment (UoA). The two correlate positively in all academic fields, from very weak (0.1) to strong (0.5), reflecting broadly linear relationships in all fields. We give the first evidence that the correlations are positive even across the arts and humanities. The patterns are similar for the field classification schemes of Scopus and Dimensions.ai, although varying for some individual subjects and therefore more uncertain for these. We also show for the first time that no field has a citation threshold beyond which all articles are excellent quality, so lists of top cited articles are not pure collections of excellence, and neither is any top citation percentile indicator. Thus, while appropriately field normalized citations associate positively with research quality in all fields, they never perfectly reflect it, even at high values.
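The field-normalized citation counts used in the study above are conventionally computed by dividing each article's citation count by the mean citation count of its field-and-year group. A minimal sketch with invented data (the key and structure are our illustration, not the paper's pipeline):

```python
from collections import defaultdict

def field_normalize(articles):
    """Divide each article's citations by the mean citation count of
    its (field, year) group -- one common field-normalization scheme."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, count]
    for a in articles:
        key = (a["field"], a["year"])
        totals[key][0] += a["citations"]
        totals[key][1] += 1
    means = {k: s / n for k, (s, n) in totals.items()}
    return [a["citations"] / means[(a["field"], a["year"])] for a in articles]

# Hypothetical records: two information-science papers, one maths paper
papers = [
    {"field": "IS", "year": 2020, "citations": 10},
    {"field": "IS", "year": 2020, "citations": 30},
    {"field": "Math", "year": 2020, "citations": 2},
]
print(field_normalize(papers))  # → [0.5, 1.5, 1.0]
```

A score of 1.0 means "cited exactly as much as the average article in the same field and year", which makes counts comparable across fields with very different citation densities.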
  6. Thelwall, M.; Levitt, J.M.: National scientific performance evolution patterns : retrenchment, successful expansion, or overextension (2018) 0.04
    
    Abstract
    National governments would like to preside over an expanding and increasingly high-impact science system but are these two goals largely independent or closely linked? This article investigates the relationship between changes in the share of the world's scientific output and changes in relative citation impact for 2.6 million articles from 26 fields in the 25 countries with the most Scopus-indexed journal articles from 1996 to 2015. There is a negative correlation between expansion and relative citation impact, but their relationship varies. China, Spain, Australia, and Poland were successful overall across the 26 fields, expanding both their share of the world's output and its relative citation impact, whereas Japan, France, Sweden, and Israel had decreased shares and relative citation impact. In contrast, the USA, UK, Germany, Italy, Russia, The Netherlands, Switzerland, Finland, and Denmark all enjoyed increased relative citation impact despite a declining share of publications. Finally, India, South Korea, Brazil, Taiwan, and Turkey all experienced sustained expansion but a recent fall in relative citation impact. These results may partly reflect changes in the coverage of Scopus and the selection of fields.
  7. Thelwall, M.; Maflahi, N.: Academic collaboration rates and citation associations vary substantially between countries and fields (2020) 0.04
    
    Abstract
    Research collaboration is promoted by governments and research funders, but if the relative prevalence and merits of collaboration vary internationally then different national and disciplinary strategies may be needed to promote it. This study compares the team size and field normalized citation impact of research across all 27 Scopus broad fields in the 10 countries with the most journal articles indexed in Scopus 2008-2012. The results show that team size varies substantially by discipline and country, with Japan (4.2) having two-thirds more authors per article than the United Kingdom (2.5). Solo authorship is rare in China (4%) but common in the United Kingdom (27%). While increasing team size associates with higher citation impact in almost all countries and fields, this association is much weaker in China than elsewhere. There are also field differences in the association between citation impact and collaboration. For example, larger team sizes in the Business, Management & Accounting category do not seem to associate with greater research impact, and for China and India, solo authorship associates with higher citation impact in this field. Overall, there are substantial international and field differences in the extent to which researchers collaborate and the extent to which collaboration associates with higher citation impact.
  8. Thelwall, M.; Sud, P.: Do new research issues attract more citations? : a comparison between 25 Scopus subject categories (2021) 0.04
    
    Abstract
    Finding new ways to help researchers and administrators understand academic fields is an important task for information scientists. Given the importance of interdisciplinary research, it is essential to be aware of disciplinary differences in aspects of scholarship, such as the significance of recent changes in a field. This paper identifies potential changes in 25 subject categories through a term comparison of words in article titles, keywords and abstracts in 1 year compared to the previous 4 years. The scholarly influence of new research issues is indirectly assessed with a citation analysis of articles matching each trending term. While topic-related words dominate the top terms, style, national focus, and language changes are also evident. Thus, as reflected in Scopus, fields evolve along multiple dimensions. Moreover, while articles exploiting new issues are usually more cited in some fields, such as Organic Chemistry, they are usually less cited in others, including History. The possible causes of new issues being less cited include externally driven temporary factors, such as disease outbreaks, and internally driven temporary decisions, such as a deliberate emphasis on a single topic (e.g., through a journal special issue).
  9. Thelwall, M.; Maflahi, N.: Are scholarly articles disproportionately read in their own country? : An analysis of mendeley readers (2015) 0.03
    
    Abstract
    International collaboration tends to result in more highly cited research and, partly as a result of this, many research funding schemes are specifically international in scope. Nevertheless, it is not clear whether this citation advantage is the result of higher quality research or due to other factors, such as a larger audience for the publications. To test whether the apparent advantage of internationally collaborative research may be due to additional interest in articles from the countries of the authors, this article assesses the extent to which the national affiliations of the authors of articles affect the national affiliations of their Mendeley readers. Based on English-language Web of Science articles in 10 fields from science, medicine, social science, and the humanities, the results of statistical models comparing author and reader affiliations suggest that, in most fields, Mendeley users are disproportionately readers of articles authored from within their own country. In addition, there are several cases in which Mendeley users from certain countries tend to ignore articles from specific other countries, although it is not clear whether this reflects national biases or different national specialisms within a field. In conclusion, research funders should not incentivize international collaboration on the basis that it is, in general, higher quality because its higher impact may be primarily due to its larger audience. Moreover, authors should guard against national biases in their reading to select only the best and most relevant publications to inform their research.
  10. Mohammadi, E.; Thelwall, M.: Mendeley readership altmetrics for the social sciences and humanities : research evaluation and knowledge flows (2014) 0.03
    
    Abstract
    Although there is evidence that counting the readers of an article in the social reference site, Mendeley, may help to capture its research impact, the extent to which this is true for different scientific fields is unknown. In this study, we compare Mendeley readership counts with citations for different social sciences and humanities disciplines. The overall correlation between Mendeley readership counts and citations for the social sciences was higher than for the humanities. Low and medium correlations between Mendeley bookmarks and citation counts in all the investigated disciplines suggest that these measures reflect different aspects of research impact. Mendeley data were also used to discover patterns of information flow between scientific fields. Comparing information flows based on Mendeley bookmarking data and cross-disciplinary citation analysis for the disciplines revealed substantial similarities and some differences. Thus, the evidence from this study suggests that Mendeley readership data could be used to help capture knowledge transfer across scientific disciplines, especially for people that read but do not author articles, as well as giving impact evidence at an earlier stage than is possible with citation counts.
  11. Thelwall, M.; Wilson, P.: Does research with statistics have more impact? : the citation rank advantage of structural equation modeling (2016) 0.03
    
    Abstract
    Statistics are essential to many areas of research and individual statistical techniques may change the ways in which problems are addressed as well as the types of problems that can be tackled. Hence, specific techniques may tend to generate high-impact findings within science. This article estimates the citation advantage of a technique by calculating the average citation rank of articles using it in the issue of the journal in which they were published. Applied to structural equation modeling (SEM) and four related techniques in 3 broad fields, the results show citation advantages that vary by technique and broad field. For example, SEM seems to be more influential in all broad fields than the 4 simpler methods, with one exception, and hence seems to be particularly worth adding to statistical curricula. In contrast, Pearson correlation apparently has the highest average impact in medicine but the least in psychology. In conclusion, the results suggest that the importance of a statistical technique may vary by discipline and that even simple techniques can help to generate high-impact research in some contexts.
  12. Kousha, K.; Thelwall, M.: Patent citation analysis with Google (2017) 0.03
    
    Abstract
    Citations from patents to scientific publications provide useful evidence about the commercial impact of academic research, but automatically searchable databases are needed to exploit this connection for large-scale patent citation evaluations. Google covers multiple different international patent office databases but does not index patent citations or allow automatic searches. In response, this article introduces a semiautomatic indirect method via Bing to extract and filter patent citations from Google to academic papers with an overall precision of 98%. The method was evaluated with 322,192 science and engineering Scopus articles from every second year for the period 1996-2012. Although manual Google Patent searches give more results, especially for articles with many patent citations, the difference is not large enough to be a major problem. Within Biomedical Engineering, Biotechnology, and Pharmacology & Pharmaceutics, 7% to 10% of Scopus articles had at least one patent citation but other fields had far fewer, so patent citation analysis is only relevant for a minority of publications. Low but positive correlations between Google Patent citations and Scopus citations across all fields suggest that traditional citation counts cannot substitute for patent citations when evaluating research.
  13. Vaughan, L.; Thelwall, M.: Scholarly use of the Web : what are the key inducers of links to journal Web sites? (2003) 0.02
    
    Abstract
    Web links have been studied by information scientists for at least six years but it is only in the past two that clear evidence has emerged to show that counts of links to scholarly Web spaces (universities and departments) can correlate significantly with research measures, giving some credence to their use for the investigation of scholarly communication. This paper reports a study to investigate the factors that influence the creation of links to journal Web sites. An empirical approach is used: collecting data and testing for significant patterns. The specific questions addressed are whether site age and site content are inducers of links to a journal's Web site as measured by the ratio of link counts to Journal Impact Factors, two variables previously discovered to be related. A new methodology for data collection is also introduced that uses the Internet Archive to obtain an earliest known creation date for Web sites. The results show that both site age and site content are significant factors for the disciplines studied: library and information science, and law. Comparisons between the two fields also show disciplinary differences in Web site characteristics. Scholars and publishers should be particularly aware that richer content on a journal's Web site tends to generate links and thus traffic to the site.
  14. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  15. Larivière, V.; Sugimoto, C.R.; Macaluso, B.; Milojević, S.; Cronin, B.; Thelwall, M.: arXiv E-prints and the journal of record : an analysis of roles and relationships (2014) 0.02
    
    Abstract
    Since its creation in 1991, arXiv has become central to the diffusion of research in a number of fields. Combining data from the entirety of arXiv and the Web of Science (WoS), this article investigates (a) the proportion of papers across all disciplines that are on arXiv and the proportion of arXiv papers that are in the WoS, (b) the elapsed time between arXiv submission and journal publication, and (c) the aging characteristics and scientific impact of arXiv e-prints and their published version. It shows that the proportion of WoS papers found on arXiv varies across the specialties of physics and mathematics, and that only a few specialties make extensive use of the repository. Elapsed time between arXiv submission and journal publication has shortened but remains longer in mathematics than in physics. In physics, mathematics, as well as in astronomy and astrophysics, arXiv versions are cited more promptly and decay faster than WoS papers. The arXiv versions of papers, both published and unpublished, have lower citation rates than published papers, although there is almost no difference in the impact of the arXiv versions of published and unpublished papers.
  16. Kousha, K.; Thelwall, M.: An automatic method for assessing the teaching impact of books from online academic syllabi (2016) 0.02
    
    Abstract
    Scholars writing books that are widely used to support teaching in higher education may be undervalued because of a lack of evidence of their teaching value. Although sales data may give credible evidence for textbooks, such data may poorly reflect the educational uses of other types of books. As an alternative, this article proposes a method that automatically searches for mentions of books in online academic course syllabi, based on Bing searches for syllabi mentioning a given book, filtering out false matches through an extensive set of rules. The method had an accuracy of over 90%, based on manual checks of a sample of 2,600 results from the initial Bing searches. Over one third of the roughly 14,000 monographs checked had at least one academic syllabus mention, with more in the arts and humanities (56%) and social sciences (52%). Low but significant correlations between syllabus mentions and citations across most fields, except the social sciences, suggest that books tend to have different levels of impact for teaching and research. In conclusion, the automatic syllabus search method gives a new way to estimate the educational utility of books in a way that sales data and citation counts cannot.
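The false-match filtering step described in entry 16 can be pictured as keyword rules applied to search-result snippets. The sketch below is a hypothetical simplification: the hint words and patterns are illustrative assumptions, not the authors' published rule set:

```python
import re

# Hypothetical sketch of syllabus-mention filtering, loosely modeled on the
# method described above: a search hit for a book title is kept only if it
# looks like a genuine course syllabus. The keyword lists are illustrative
# assumptions, not the actual rules used in the study.

SYLLABUS_HINTS = re.compile(
    r"\b(syllabus|course outline|required reading|reading list|semester)\b",
    re.IGNORECASE,
)
FALSE_MATCH_HINTS = re.compile(
    r"\b(for sale|add to cart|book review|publisher)\b",
    re.IGNORECASE,
)

def looks_like_syllabus_mention(snippet: str) -> bool:
    """Keep a hit only if it contains syllabus-like terms and none of the
    obvious false-match terms (bookseller pages, reviews, publisher sites)."""
    return bool(SYLLABUS_HINTS.search(snippet)) and not FALSE_MATCH_HINTS.search(snippet)

hits = [
    "ENG 301 syllabus: required reading includes the monograph ...",
    "Buy this book online - add to cart for fast delivery",
]
kept = [h for h in hits if looks_like_syllabus_mention(h)]
```

The reported ~90% accuracy suggests the real rule set is far more extensive than this two-pattern illustration.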
  17. Thelwall, M.; Kousha, K.: ResearchGate articles : age, discipline, audience size, and impact (2017) 0.02
    
    Abstract
    The large multidisciplinary academic social website ResearchGate aims to help academics connect with each other and publicize their work. Despite its popularity, little is known about the age and discipline of the articles uploaded and viewed on the site, or whether publication statistics from the site could be useful impact indicators. In response, this article assesses samples of ResearchGate articles uploaded at specific dates, comparing their views on the site to their Mendeley readers and Scopus-indexed citations. This analysis shows that ResearchGate is dominated by recent articles, which attract about three times as many views as older articles. ResearchGate has uneven coverage of scholarship, with the arts and humanities, health professions, and decision sciences poorly represented, and some fields receiving twice as many views per article as others. View counts for uploaded articles have low to moderate positive correlations with both Scopus citations and Mendeley readers, which is consistent with view counts reflecting a wider audience than Scopus-publishing scholars. Hence, for articles uploaded to the site, view counts may give a genuinely new audience indicator.
  18. Kousha, K.; Thelwall, M.: Are wikipedia citations important evidence of the impact of scholarly articles and books? (2017) 0.02
    
    Abstract
    Individual academics and research evaluators often need to assess the value of published research. Although citation counts are a recognized indicator of scholarly impact, alternative data are needed to provide evidence of other types of impact, including impact within education and wider society. Wikipedia is a logical choice for both because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics, and hence it may cite research to support its claims. To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012. The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes, with only 5% of articles being cited across all fields. In contrast, a third of monographs have at least one citation from Wikipedia, with the most in the arts and humanities. Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated, and so Wikipedia is not recommended for evaluations affecting stakeholder interests.
  19. Kousha, K.; Thelwall, M.; Abdoli, M.: Goodreads reviews to assess the wider impacts of books (2017) 0.02
    
    Abstract
    Although peer-review and citation counts are commonly used to help assess the scholarly impact of published research, informal reader feedback might also be exploited to help assess the wider impacts of books, such as their educational or cultural value. The social website Goodreads seems to be a reasonable source for this purpose because it includes a large number of book reviews and ratings by many users inside and outside of academia. To check this, Goodreads book metrics were compared with different book-based impact indicators for 15,928 academic books across broad fields. Goodreads engagements were numerous enough in the arts (85% of books had at least one), humanities (80%), and social sciences (67%) for use as a source of impact evidence. Low and moderate correlations between Goodreads book metrics and scholarly or non-scholarly indicators suggest that reader feedback in Goodreads reflects the many purposes of books rather than a single type of impact. Although Goodreads book metrics can be manipulated, they could be used guardedly by academics, authors, and publishers in evaluations.
  20. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Do altmetric scores reflect article quality? : evidence from the UK Research Excellence Framework 2021 (2023) 0.02
    
    Abstract
    Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with individual article quality scores. It exploits expert norm-referenced peer review scores from the UK Research Excellence Framework 2021 for over 67,030 journal articles from all fields, published 2014-2017/2018, split into 34 broadly field-based Units of Assessment (UoAs). Altmetrics correlated more strongly with research quality than previously found, although less strongly than raw and field-normalized Scopus citation counts. Surprisingly, field-normalizing citation counts can reduce their strength as a quality indicator for articles in a single field. For most UoAs, Mendeley reader counts are the best altmetric (e.g., three Spearman correlations with quality scores above 0.5); tweet counts are also a moderate-strength indicator in eight UoAs (Spearman correlations with quality scores above 0.3), ahead of news citations (eight correlations above 0.3, but generally weaker), blog citations (five correlations above 0.3), and Facebook citations (three correlations above 0.3), at least in the United Kingdom. In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities.
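The Spearman correlations reported in entry 20 compare two rankings of the same articles (an altmetric count versus a peer-review quality score). A minimal self-contained sketch of that computation is below; the data are made up for illustration, not REF data, and real analyses would use far larger samples:

```python
# Minimal Spearman rank correlation, the statistic used above to compare
# altmetric counts with peer-review quality scores. Pure-Python sketch with
# average ranks for ties; the sample data are hypothetical.

def ranks(values):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

mendeley_readers = [3, 10, 1, 25, 7]  # hypothetical per-article reader counts
quality_scores = [2, 3, 1, 4, 3]      # hypothetical 1-4 peer-review scores

rho = spearman(mendeley_readers, quality_scores)
```

A rank-based statistic is the natural choice here because altmetric counts are heavily skewed, so a monotone association matters more than a linear one.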