Search (27 results, page 1 of 2)

  • × author_ss:"Thelwall, M."
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
  1. Thelwall, M.; Levitt, J.M.: National scientific performance evolution patterns : retrenchment, successful expansion, or overextension (2018) 0.01
    0.0081539685 = product of:
      0.073385715 = sum of:
        0.073385715 = weight(_text_:germany in 4225) [ClassicSimilarity], result of:
          0.073385715 = score(doc=4225,freq=2.0), product of:
            0.22275731 = queryWeight, product of:
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.037353165 = queryNorm
            0.32944247 = fieldWeight in 4225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.963546 = idf(docFreq=308, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4225)
      0.11111111 = coord(1/9)
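The nested explanation above is Lucene's ClassicSimilarity score breakdown for this hit: the leaf values (tf, idf, norms) multiply up into the final relevance score. A minimal sketch of that composition, using the printed values (illustrative Python, not Lucene's API):

```python
import math

# Values taken from the explain tree for result 1 (term "germany", doc 4225).
freq = 2.0               # term frequency in the matched field
idf = 5.963546           # inverse document frequency (docFreq=308, maxDocs=44218)
query_norm = 0.037353165
field_norm = 0.0390625
coord = 1 / 9            # coord(1/9): 1 of 9 query clauses matched this doc

tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm       # 0.22275731 = queryWeight
field_weight = tf * idf * field_norm  # 0.32944247 = fieldWeight
score = coord * (query_weight * field_weight)  # ~0.00815, shown rounded as 0.01
```

The same arithmetic reproduces every explain tree on this page; only the leaf values differ per document and term.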
    
    Abstract
    National governments would like to preside over an expanding and increasingly high-impact science system, but are these two goals largely independent or closely linked? This article investigates the relationship between changes in the share of the world's scientific output and changes in relative citation impact for 2.6 million articles from 26 fields in the 25 countries with the most Scopus-indexed journal articles from 1996 to 2015. There is a negative correlation between expansion and relative citation impact, but their relationship varies. China, Spain, Australia, and Poland were successful overall across the 26 fields, expanding both their share of the world's output and their relative citation impact, whereas Japan, France, Sweden, and Israel had decreased shares and relative citation impact. In contrast, the USA, UK, Germany, Italy, Russia, The Netherlands, Switzerland, Finland, and Denmark all enjoyed increased relative citation impact despite a declining share of publications. Finally, India, South Korea, Brazil, Taiwan, and Turkey all experienced sustained expansion but a recent fall in relative citation impact. These results may partly reflect changes in the coverage of Scopus and the selection of fields.
  2. Thelwall, M.; Sud, P.; Wilkinson, D.: Link and co-inlink network diagrams with URL citations or title mentions (2012) 0.01
    0.007396452 = product of:
      0.033284035 = sum of:
        0.02063194 = weight(_text_:data in 57) [ClassicSimilarity], result of:
          0.02063194 = score(doc=57,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=57)
        0.012652095 = product of:
          0.02530419 = sum of:
            0.02530419 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.02530419 = score(doc=57,freq=2.0), product of:
                0.13080442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037353165 = queryNorm
                0.19345059 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
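This hit combines two matched clauses, so the outer coord is 2/9 and the "22" clause carries its own inner coord(1/2). The tf and idf leaves across all these trees follow ClassicSimilarity's standard definitions; a quick sanity check against the printed numbers (a sketch under that assumption, not Lucene code):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(freq)
    return math.sqrt(freq)

# "germany" (docFreq=308) and "data" (docFreq=5088) reproduce the printed idf values.
idf_germany = idf(308, 44218)   # ~5.963546
idf_data = idf(5088, 44218)     # ~3.1620505

# Result 2's total: outer coord(2/9) over the "data" weight plus the
# coord(1/2)-scaled "22" weight, exactly as the tree above composes them.
score2 = (2 / 9) * (0.02063194 + (1 / 2) * 0.02530419)  # ~0.007396452
```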
    
    Abstract
    Webometric network analyses have been used to map the connectivity of groups of websites to identify clusters, important sites or overall structure. Such analyses have mainly been based upon hyperlink counts, the number of hyperlinks between a pair of websites, although some have used title mentions or URL citations instead. The ability to automatically gather hyperlink counts from Yahoo! ceased in April 2011 and the ability to manually gather such counts was due to cease by early 2012, creating a need for alternatives. This article assesses URL citations and title mentions as possible replacements for hyperlinks in both binary and weighted direct link and co-inlink network diagrams. It also assesses three different types of data for the network connections: hit count estimates, counts of matching URLs, and filtered counts of matching URLs. Results from analyses of U.S. library and information science departments and U.K. universities give evidence that metrics based upon URLs or titles can be appropriate replacements for metrics based upon hyperlinks for both binary and weighted networks, although filtered counts of matching URLs are necessary to give the best results for co-title mention and co-URL citation network diagrams.
    Date
    6.4.2012 18:16:22
  3. Thelwall, M.; Delgado, M.M.: Arts and humanities research evaluation : no metrics please, just data (2015) 0.01
    0.0061512557 = product of:
      0.0553613 = sum of:
        0.0553613 = weight(_text_:data in 2313) [ClassicSimilarity], result of:
          0.0553613 = score(doc=2313,freq=10.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.46871632 = fieldWeight in 2313, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2313)
      0.11111111 = coord(1/9)
    
    Abstract
    Purpose The purpose of this paper is to make an explicit case for the use of data with contextual information as evidence in arts and humanities research evaluations rather than systematic metrics. Design/methodology/approach A survey of the strengths and limitations of citation-based indicators is combined with evidence about existing uses of wider impact data in the arts and humanities, with particular reference to the 2014 UK Research Excellence Framework. Findings Data are already used as impact evidence in the arts and humanities but this practice should become more widespread. Practical implications Arts and humanities researchers should be encouraged to think creatively about the kinds of data that they may be able to generate in support of the value of their research and should not rely upon standardised metrics. Originality/value This paper combines practices emerging in the arts and humanities with research evaluation from a scientometric perspective to generate new recommendations.
  4. Mohammadi, E.; Thelwall, M.: Mendeley readership altmetrics for the social sciences and humanities : research evaluation and knowledge flows (2014) 0.00
    0.0039706184 = product of:
      0.035735566 = sum of:
        0.035735566 = weight(_text_:data in 2190) [ClassicSimilarity], result of:
          0.035735566 = score(doc=2190,freq=6.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.30255508 = fieldWeight in 2190, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2190)
      0.11111111 = coord(1/9)
    
    Abstract
    Although there is evidence that counting the readers of an article in the social reference site, Mendeley, may help to capture its research impact, the extent to which this is true for different scientific fields is unknown. In this study, we compare Mendeley readership counts with citations for different social sciences and humanities disciplines. The overall correlation between Mendeley readership counts and citations for the social sciences was higher than for the humanities. Low and medium correlations between Mendeley bookmarks and citation counts in all the investigated disciplines suggest that these measures reflect different aspects of research impact. Mendeley data were also used to discover patterns of information flow between scientific fields. Comparing information flows based on Mendeley bookmarking data and cross-disciplinary citation analysis for the disciplines revealed substantial similarities and some differences. Thus, the evidence from this study suggests that Mendeley readership data could be used to help capture knowledge transfer across scientific disciplines, especially for people that read but do not author articles, as well as giving impact evidence at an earlier stage than is possible with citation counts.
  5. Kousha, K.; Thelwall, M.: ¬An automatic method for assessing the teaching impact of books from online academic syllabi (2016) 0.00
    0.0039706184 = product of:
      0.035735566 = sum of:
        0.035735566 = weight(_text_:data in 3226) [ClassicSimilarity], result of:
          0.035735566 = score(doc=3226,freq=6.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.30255508 = fieldWeight in 3226, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3226)
      0.11111111 = coord(1/9)
    
    Abstract
    Scholars writing books that are widely used to support teaching in higher education may be undervalued because of a lack of evidence of teaching value. Although sales data may give credible evidence for textbooks, these data may poorly reflect educational uses of other types of books. As an alternative, this article proposes a method to search automatically for mentions of books in online academic course syllabi, based on Bing searches for syllabi mentioning a given book and filtering out false matches through an extensive set of rules. The method had an accuracy of over 90% based on manual checks of a sample of 2,600 results from the initial Bing searches. Over one-third of about 14,000 monographs checked had one or more academic syllabus mentions, with more in the arts and humanities (56%) and social sciences (52%). Low but significant correlations between syllabus mentions and citations across most fields, except the social sciences, suggest that books tend to have different levels of impact for teaching and research. In conclusion, the automatic syllabus search method gives a new way to estimate the educational utility of books in a way that sales data and citation counts cannot.
  6. Thelwall, M.; Buckley, K.: Topic-based sentiment analysis for the social web : the role of mood and issue-related words (2013) 0.00
    0.0038903956 = product of:
      0.03501356 = sum of:
        0.03501356 = weight(_text_:data in 1004) [ClassicSimilarity], result of:
          0.03501356 = score(doc=1004,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.29644224 = fieldWeight in 1004, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1004)
      0.11111111 = coord(1/9)
    
    Abstract
    General sentiment analysis for the social web has become increasingly useful for shedding light on the role of emotion in online communication and offline events in both academic research and data journalism. Nevertheless, existing general-purpose social web sentiment analysis algorithms may not be optimal for texts focussed around specific topics. This article introduces 2 new methods, mood setting and lexicon extension, to improve the accuracy of topic-specific lexical sentiment strength detection for the social web. Mood setting allows the topic mood to determine the default polarity for ostensibly neutral expressive text. Topic-specific lexicon extension involves adding topic-specific words to the default general sentiment lexicon. Experiments with 8 data sets show that both methods can improve sentiment analysis performance in corpora and are recommended when the topic focus is tightest.
  7. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment strength detection for the social web (2012) 0.00
    0.0032419965 = product of:
      0.029177967 = sum of:
        0.029177967 = weight(_text_:data in 4972) [ClassicSimilarity], result of:
          0.029177967 = score(doc=4972,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.24703519 = fieldWeight in 4972, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4972)
      0.11111111 = coord(1/9)
    
    Abstract
    Sentiment analysis is concerned with the automatic extraction of sentiment-related information from text. Although most sentiment analysis addresses commercial tasks, such as extracting opinions from product reviews, there is increasing interest in the affective dimension of the social web, and Twitter in particular. Most sentiment analysis algorithms are not ideally suited to this task because they exploit indirect indicators of sentiment that can reflect genre or topic instead. Hence, such algorithms used to process social web texts can identify spurious sentiment patterns caused by topics rather than affective phenomena. This article assesses an improved version of the algorithm SentiStrength for sentiment strength detection across the social web that primarily uses direct indications of sentiment. The results from six diverse social web data sets (MySpace, Twitter, YouTube, Digg, Runners World, BBC Forums) indicate that SentiStrength 2 is successful in the sense of performing better than a baseline approach for all data sets in both supervised and unsupervised cases. SentiStrength is not always better than machine-learning approaches that exploit indirect indicators of sentiment, however, and is particularly weaker for positive sentiment in news-related discussions. Overall, the results suggest that, even unsupervised, SentiStrength is robust enough to be applied to a wide variety of different social web contexts.
  8. Haustein, S.; Peters, I.; Sugimoto, C.R.; Thelwall, M.; Larivière, V.: Tweeting biomedicine : an analysis of tweets and citations in the biomedical literature (2014) 0.00
    0.0032419965 = product of:
      0.029177967 = sum of:
        0.029177967 = weight(_text_:data in 1229) [ClassicSimilarity], result of:
          0.029177967 = score(doc=1229,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.24703519 = fieldWeight in 1229, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1229)
      0.11111111 = coord(1/9)
    
    Abstract
    Data collected by social media platforms have been introduced as new sources for indicators to help measure the impact of scholarly research in ways that are complementary to traditional citation analysis. Data generated from social media activities can be used to reflect broad types of impact. This article aims to provide systematic evidence about how often Twitter is used to disseminate information about journal articles in the biomedical sciences. The analysis is based on 1.4 million documents covered by both PubMed and Web of Science and published between 2010 and 2012. The number of tweets containing links to these documents was analyzed and compared to citations to evaluate the degree to which certain journals, disciplines, and specialties were represented on Twitter and how far tweets correlate with citation impact. With less than 10% of PubMed articles mentioned on Twitter, its uptake is low in general but differs between journals and specialties. Correlations between tweets and citations are low, implying that impact metrics based on tweets are different from those based on citations. A framework using the coverage of articles and the correlation between Twitter mentions and citations is proposed to facilitate the evaluation of novel social-media-based metrics.
  9. Mohammadi, E.; Thelwall, M.; Haustein, S.; Larivière, V.: Who reads research articles? : an altmetrics analysis of Mendeley user categories (2015) 0.00
    0.0032419965 = product of:
      0.029177967 = sum of:
        0.029177967 = weight(_text_:data in 2162) [ClassicSimilarity], result of:
          0.029177967 = score(doc=2162,freq=4.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.24703519 = fieldWeight in 2162, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2162)
      0.11111111 = coord(1/9)
    
    Abstract
    Little detailed information is known about who reads research articles and the contexts in which research articles are read. Using data about people who register in Mendeley as readers of articles, this article explores different types of users of Clinical Medicine, Engineering and Technology, Social Science, Physics, and Chemistry articles inside and outside academia. The majority of readers for all disciplines were PhD students, postgraduates, and postdocs but other types of academics were also represented. In addition, many Clinical Medicine articles were read by medical professionals. The highest correlations between citations and Mendeley readership counts were found for types of users who often authored academic articles, except for associate professors in some sub-disciplines. This suggests that Mendeley readership can reflect usage similar to traditional citation impact if the data are restricted to readers who are also authors without the delay of impact measured by citation counts. At the same time, Mendeley statistics can also reveal the hidden impact of some research articles, such as educational value for nonauthor users inside academia or the impact of research articles on practice for readers outside academia.
  10. Thelwall, M.; Wilkinson, D.: Public dialogs in social network sites : What is their purpose? (2010) 0.00
    0.002750925 = product of:
      0.024758326 = sum of:
        0.024758326 = weight(_text_:data in 3327) [ClassicSimilarity], result of:
          0.024758326 = score(doc=3327,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 3327, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3327)
      0.11111111 = coord(1/9)
    
    Theme
    Data Mining
  11. Didegah, F.; Thelwall, M.: Determinants of research citation impact in nanoscience and nanotechnology (2013) 0.00
    0.002750925 = product of:
      0.024758326 = sum of:
        0.024758326 = weight(_text_:data in 737) [ClassicSimilarity], result of:
          0.024758326 = score(doc=737,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=737)
      0.11111111 = coord(1/9)
    
    Abstract
    This study investigates a range of metrics available when a nanoscience and nanotechnology article is published to see which metrics correlate more with the number of citations to the article. It also introduces the degree of internationality of journals and references as new metrics for this purpose. The journal impact factor; the impact of references; the internationality of authors, journals, and references; and the number of authors, institutions, and references were all calculated for papers published in nanoscience and nanotechnology journals in the Web of Science from 2007 to 2009. Using a zero-inflated negative binomial regression model on the data set, the impact factor of the publishing journal and the citation impact of the cited references were found to be the most effective determinants of citation counts in all four time periods. In the entire 2007 to 2009 period, apart from journal internationality and author numbers and internationality, all other predictor variables had significant effects on citation counts.
  12. Shema, H.; Bar-Ilan, J.; Thelwall, M.: Do blog citations correlate with a higher number of future citations? : Research blogs as a potential source for alternative metrics (2014) 0.00
    0.002750925 = product of:
      0.024758326 = sum of:
        0.024758326 = weight(_text_:data in 1258) [ClassicSimilarity], result of:
          0.024758326 = score(doc=1258,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 1258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1258)
      0.11111111 = coord(1/9)
    
    Abstract
    Journal-based citations are an important source of data for impact indices. However, the impact of journal articles extends beyond formal scholarly discourse. Measuring online scholarly impact calls for new indices, complementary to the older ones. This article examines a possible alternative metric source, blog posts aggregated at ResearchBlogging.org, which discuss peer-reviewed articles and provide full bibliographic references. Articles reviewed in these blogs therefore receive "blog citations." We hypothesized that articles receiving blog citations close to their publication time receive more journal citations later than the articles in the same journal published in the same year that did not receive such blog citations. Statistically significant evidence for articles published in 2009 and 2010 supports this hypothesis for 7 of 12 journals (58%) in 2009 and 13 of 19 journals (68%) in 2010. We suggest, based on these results, that blog citations can be used as an alternative metric source.
  13. Thelwall, M.; Kousha, K.: ResearchGate: Disseminating, communicating, and measuring scholarship? (2015) 0.00
    0.002750925 = product of:
      0.024758326 = sum of:
        0.024758326 = weight(_text_:data in 1813) [ClassicSimilarity], result of:
          0.024758326 = score(doc=1813,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 1813, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1813)
      0.11111111 = coord(1/9)
    
    Abstract
    ResearchGate is a social network site for academics to create their own profiles, list their publications, and interact with each other. Like Academia.edu, it provides a new way for scholars to disseminate their work and hence potentially changes the dynamics of informal scholarly communication. This article assesses whether ResearchGate usage and publication data broadly reflect existing academic hierarchies and whether individual countries are set to benefit or lose out from the site. The results show that rankings based on ResearchGate statistics correlate moderately well with other rankings of academic institutions, suggesting that ResearchGate use broadly reflects the traditional distribution of academic capital. Moreover, while Brazil, India, and some other countries seem to be disproportionately taking advantage of ResearchGate, academics in China, South Korea, and Russia may be missing opportunities to use ResearchGate to maximize the academic impact of their publications.
  14. Maflahi, N.; Thelwall, M.: When are readership counts as useful as citation counts? : Scopus versus Mendeley for LIS journals (2016) 0.00
    0.002750925 = product of:
      0.024758326 = sum of:
        0.024758326 = weight(_text_:data in 2495) [ClassicSimilarity], result of:
          0.024758326 = score(doc=2495,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 2495, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2495)
      0.11111111 = coord(1/9)
    
    Abstract
    In theory, articles can attract readers on the social reference sharing site Mendeley before they can attract citations, so Mendeley altmetrics could provide early indications of article impact. This article investigates the influence of time on the number of Mendeley readers of an article through a theoretical discussion and an investigation into the relationship between counts of readers of, and citations to, 4 general library and information science (LIS) journals. For this discipline, it takes about 7 years for articles to attract as many Scopus citations as Mendeley readers, and after this the Spearman correlation between readers and citers is stable at about 0.6 for all years. This suggests that Mendeley readership counts may be useful impact indicators for both newer and older articles. The lack of dates for individual Mendeley article readers and an unknown bias toward more recent articles mean that readership data should be normalized individually by year, however, before making any comparisons between articles published in different years.
  15. Thelwall, M.: Book genre and author gender : romance > paranormal-romance to autobiography > memoir (2017) 0.00
    0.002750925 = product of:
      0.024758326 = sum of:
        0.024758326 = weight(_text_:data in 3598) [ClassicSimilarity], result of:
          0.024758326 = score(doc=3598,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.2096163 = fieldWeight in 3598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3598)
      0.11111111 = coord(1/9)
    
    Abstract
    Although gender differences are known to exist in the publishing industry and in reader preferences, there is little public systematic data about them. This article uses evidence from the book-based social website Goodreads to provide a large-scale analysis of 50 major English book genres based on author genders. The results show gender differences in authorship in almost all categories, and gender differences in the level of interest in, and ratings of, books in a minority of categories. Perhaps surprisingly in this context, there is not a clear gender-based relationship between the success of an author and their prevalence within a genre. The unexpected, almost universal authorship gender differences should give new impetus to investigations of the importance of gender in fiction, and the success of minority-gender authors in some genres should encourage publishers and librarians to take their work seriously, except perhaps for most male-authored chick-lit.
  16. Thelwall, M.: ¬A comparison of link and URL citation counting (2011) 0.00
    0.0022924377 = product of:
      0.02063194 = sum of:
        0.02063194 = weight(_text_:data in 4533) [ClassicSimilarity], result of:
          0.02063194 = score(doc=4533,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 4533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4533)
      0.11111111 = coord(1/9)
    
    Abstract
    Purpose - Link analysis is an established topic within webometrics. It normally uses counts of links between sets of web sites or to sets of web sites. These link counts are derived from web crawlers or commercial search engines with the latter being the only alternative for some investigations. This paper compares link counts with URL citation counts in order to assess whether the latter could be a replacement for the former if the major search engines withdraw their advanced hyperlink search facilities. Design/methodology/approach - URL citation counts are compared with link counts for a variety of data sets used in previous webometric studies. Findings - The results show a high degree of correlation between the two but with URL citations being much less numerous, at least outside academia and business. Research limitations/implications - The results cover a small selection of 15 case studies and so the findings are only indicative. Significant differences between results indicate that the difference between link counts and URL citation counts will vary between webometric studies. Practical implications - Should link searches be withdrawn, then link analyses of less well linked non-academic, non-commercial sites would be seriously weakened, although citations based on e-mail addresses could help to make citations more numerous than links for some business and academic contexts. Originality/value - This is the first systematic study of the difference between link counts and URL citation counts in a variety of contexts and it shows that there are significant differences between the two.
  17. Thelwall, M.; Sud, P.: ¬A comparison of methods for collecting web citation data for academic organizations (2011) 0.00
    0.0022924377 = product of:
      0.02063194 = sum of:
        0.02063194 = weight(_text_:data in 4626) [ClassicSimilarity], result of:
          0.02063194 = score(doc=4626,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 4626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
      0.11111111 = coord(1/9)
    
  18. Larivière, V.; Sugimoto, C.R.; Macaluso, B.; Milojević, S.; Cronin, B.; Thelwall, M.: arXiv E-prints and the journal of record : an analysis of roles and relationships (2014) 0.00
    0.0022924377 = product of:
      0.02063194 = sum of:
        0.02063194 = weight(_text_:data in 1285) [ClassicSimilarity], result of:
          0.02063194 = score(doc=1285,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 1285, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1285)
      0.11111111 = coord(1/9)
    
    Abstract
    Since its creation in 1991, arXiv has become central to the diffusion of research in a number of fields. Combining data from the entirety of arXiv and the Web of Science (WoS), this article investigates (a) the proportion of papers across all disciplines that are on arXiv and the proportion of arXiv papers that are in the WoS, (b) the elapsed time between arXiv submission and journal publication, and (c) the aging characteristics and scientific impact of arXiv e-prints and their published versions. It shows that the proportion of WoS papers found on arXiv varies across the specialties of physics and mathematics, and that only a few specialties make extensive use of the repository. Elapsed time between arXiv submission and journal publication has shortened but remains longer in mathematics than in physics. In physics, mathematics, as well as in astronomy and astrophysics, arXiv versions are cited more promptly and decay faster than WoS papers. The arXiv versions of papers, both published and unpublished, have lower citation rates than published papers, although there is almost no difference in the impact of the arXiv versions of published and unpublished papers.
  19. Mohammadi, E.; Thelwall, M.; Kousha, K.: Can Mendeley bookmarks reflect readership? : a survey of user motivations (2016) 0.00
    0.0022924377 = product of:
      0.02063194 = sum of:
        0.02063194 = weight(_text_:data in 2897) [ClassicSimilarity], result of:
          0.02063194 = score(doc=2897,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 2897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2897)
      0.11111111 = coord(1/9)
    
    Abstract
    Although Mendeley bookmarking counts appear to correlate moderately with conventional citation metrics, it is not known whether publications bookmarked in Mendeley are actually read. Without this information, it is not possible to give a confident interpretation of altmetrics derived from Mendeley. In response, a survey of 860 Mendeley users shows that it is reasonable to use Mendeley bookmarking counts as an indication of readership because most (55%) users with a Mendeley library had read or intended to read at least half of their bookmarked publications. This was true across all broad areas of scholarship except for the arts and humanities (42%). About 85% of the respondents also declared that they bookmarked articles in Mendeley to cite them in their publications, but some also bookmark articles for use in professional (50%), teaching (25%), and educational activities (13%). Of course, it is likely that most readers do not record articles in Mendeley and so these data do not represent all readers. In conclusion, Mendeley bookmark counts seem to be indicators of readership leading to a combination of scholarly impact and wider professional impact.
  20. Kousha, K.; Thelwall, M.: Are wikipedia citations important evidence of the impact of scholarly articles and books? (2017) 0.00
    0.0022924377 = product of:
      0.02063194 = sum of:
        0.02063194 = weight(_text_:data in 3440) [ClassicSimilarity], result of:
          0.02063194 = score(doc=3440,freq=2.0), product of:
            0.118112594 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.037353165 = queryNorm
            0.17468026 = fieldWeight in 3440, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3440)
      0.11111111 = coord(1/9)
    
    Abstract
    Individual academics and research evaluators often need to assess the value of published research. Although citation counts are a recognized indicator of scholarly impact, alternative data is needed to provide evidence of other types of impact, including within education and wider society. Wikipedia is a logical choice for both of these because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics and hence it may cite research to support its claims. To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012. The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes, with only 5% of articles cited across all fields. In contrast, a third of monographs have at least one citation from Wikipedia, with the most in the arts and humanities. Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated and so Wikipedia is not recommended for evaluations affecting stakeholder interests.