Search (115 results, page 1 of 6)

  Active filter: author_ss:"Thelwall, M."
  1. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.09
    0.08575615 = product of:
      0.11434154 = sum of:
        0.01841403 = weight(_text_:for in 586) [ClassicSimilarity], result of:
          0.01841403 = score(doc=586,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 586, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.079913534 = weight(_text_:computing in 586) [ClassicSimilarity], result of:
          0.079913534 = score(doc=586,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 586, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.032027967 = score(doc=586,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.11, S.1631-1644
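  The indented breakdown under each hit is Lucene/Solr "explain" output for the classic TF-IDF similarity (ClassicSimilarity). As a check on how those numbers combine, the minimal sketch below reconstructs the score of result 1 from the figures reported for doc 586; the helper function names are illustrative and not part of Lucene's API.
    import math

    # ClassicSimilarity building blocks, using the figures reported for doc 586.
    def query_weight(idf, query_norm):
        return idf * query_norm                      # queryWeight = idf * queryNorm

    def field_weight(freq, idf, field_norm):
        return math.sqrt(freq) * idf * field_norm    # fieldWeight = tf * idf * fieldNorm

    def term_score(freq, idf, query_norm, field_norm):
        return query_weight(idf, query_norm) * field_weight(freq, idf, field_norm)

    QUERY_NORM = 0.047278564
    FIELD_NORM = 0.0390625                           # fieldNorm(doc=586)

    w_for = term_score(8.0, 1.8775425, QUERY_NORM, FIELD_NORM)        # ~0.01841403
    w_computing = term_score(2.0, 5.5314693, QUERY_NORM, FIELD_NORM)  # ~0.07991353
    w_22 = term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)         # ~0.03202797

    # "22" sits in a nested clause that matched 1 of its 2 sub-queries: coord(1/2).
    inner = 0.5 * w_22                               # ~0.01601398

    # The outer query matched 3 of its 4 clauses: coord(3/4) = 0.75.
    total = 0.75 * (w_for + w_computing + inner)
    print(round(total, 8))                           # ~0.08575615, the reported score
  The 0.09 shown on the hit line is this value rounded to two decimal places.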
  2. Thelwall, M.: Text characteristics of English language university Web sites (2005) 0.06
    0.05899654 = product of:
      0.11799308 = sum of:
        0.022096837 = weight(_text_:for in 3463) [ClassicSimilarity], result of:
          0.022096837 = score(doc=3463,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 3463, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3463)
        0.095896244 = weight(_text_:computing in 3463) [ClassicSimilarity], result of:
          0.095896244 = score(doc=3463,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 3463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=3463)
      0.5 = coord(2/4)
    
    Abstract
    The nature of the contents of academic Web sites is of direct relevance to the new field of scientific Web intelligence, and for search engine and topic-specific crawler designers. We analyze word frequencies in national academic Webs using the Web sites of three English-speaking nations: Australia, New Zealand, and the United Kingdom. Strong regularities were found in page size and word frequency distributions, but with significant anomalies. At least 26% of pages contain no words. High frequency words include university names and acronyms, Internet terminology, and computing product names: not always words in common usage away from the Web. A minority of low frequency words are spelling mistakes, with other common types including nonwords, proper names, foreign language terms, or computer science variable names. Based upon these findings, recommendations for data cleansing and filtering are made, particularly for clustering applications.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.6, S.609-619
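  Results 2 and 19 both rest on word-frequency profiles of academic web pages and on removing low-frequency tokens (misspellings, variable names, non-words) before clustering. The sketch below is a minimal illustration of that kind of counting and filtering, assuming a simple lowercase tokeniser and an arbitrary min_count threshold; it is not the authors' procedure.
    import re
    from collections import Counter

    def word_frequencies(pages):
        """Count lowercase word tokens across a collection of page texts."""
        counts = Counter()
        for text in pages:
            counts.update(re.findall(r"[a-z]+", text.lower()))
        return counts

    def drop_low_frequency(counts, min_count=2):
        """Drop rare tokens (often misspellings, variable names or non-words)
        before clustering, in line with the cleansing recommendations above."""
        return {word: n for word, n in counts.items() if n >= min_count}

    pages = [
        "University of Exampletown home page",
        "Research computing services at the university",
        "Contact the university webmaster",
    ]
    frequencies = word_frequencies(pages)
    print(frequencies.most_common(3))       # e.g. [('university', 3), ('the', 2), ...]
    print(drop_low_frequency(frequencies))  # keeps only tokens seen at least twice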
  3. Kousha, K.; Thelwall, M.: Google book search : citation analysis for social science and the humanities (2009) 0.05
    0.052136503 = product of:
      0.10427301 = sum of:
        0.024359472 = weight(_text_:for in 2946) [ClassicSimilarity], result of:
          0.024359472 = score(doc=2946,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27441877 = fieldWeight in 2946, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2946)
        0.079913534 = weight(_text_:computing in 2946) [ClassicSimilarity], result of:
          0.079913534 = score(doc=2946,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 2946, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2946)
      0.5 = coord(2/4)
    
    Abstract
    In both the social sciences and the humanities, books and monographs play significant roles in research communication. The absence of citations from most books and monographs from the Thomson Reuters/Institute for Scientific Information databases (ISI) has been criticized, but attempts to include citations from or to books in the research evaluation of the social sciences and humanities have not led to widespread adoption. This article assesses whether Google Book Search (GBS) can partially fill this gap by comparing citations from books with citations from journal articles to journal articles in 10 science, social science, and humanities disciplines. Book citations were 31% to 212% of ISI citations and, hence, numerous enough to supplement ISI citations in the social sciences and humanities covered, but not in the sciences (3%-5%), except for computing (46%), due to numerous published conference proceedings. A case study was also made of all 1,923 articles in the 51 information science and library science ISI-indexed journals published in 2003. Within this set, highly book-cited articles tended to receive many ISI citations, indicating a significant relationship between the two types of citation data, but with important exceptions that point to the additional information provided by book citations. In summary, GBS is clearly a valuable new source of citation data for the social sciences and humanities. One practical implication is that book-oriented scholars should consult it for additional citations to their work when applying for promotion and tenure.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.8, S.1537-1549
  4. Kousha, K.; Thelwall, M.: Google Scholar citations and Google Web/URL citations : a multi-discipline exploratory analysis (2007) 0.05
    0.04916378 = product of:
      0.09832756 = sum of:
        0.01841403 = weight(_text_:for in 337) [ClassicSimilarity], result of:
          0.01841403 = score(doc=337,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 337, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=337)
        0.079913534 = weight(_text_:computing in 337) [ClassicSimilarity], result of:
          0.079913534 = score(doc=337,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=337)
      0.5 = coord(2/4)
    
    Abstract
    We use a new data gathering method, "Web/URL citation," and Google Scholar to compare traditional and Web-based citation patterns across multiple disciplines (biology, chemistry, physics, computing, sociology, economics, psychology, and education) based upon a sample of 1,650 articles from 108 open access (OA) journals published in 2001. A Web/URL citation of an online journal article is a Web mention of its title, URL, or both. For each discipline, except psychology, we found significant correlations between Thomson Scientific (formerly Thomson ISI, here: ISI) citations and both Google Scholar and Google Web/URL citations. Google Scholar citations correlated more highly with ISI citations than did Google Web/URL citations, indicating that the Web/URL method measures a broader type of citation phenomenon. Google Scholar citations were more numerous than ISI citations in computer science and the four social science disciplines, suggesting that Google Scholar is more comprehensive for social sciences and perhaps also when conference articles are valued and published online. We also found large disciplinary differences in the percentage overlap between ISI and Google Scholar citation sources. Finally, although we found many significant trends, there were also numerous exceptions, suggesting that replacing traditional citation sources with the Web or Google Scholar for research impact calculations would be problematic.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.7, S.1055-1065
  5. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.02
    0.024224073 = product of:
      0.048448145 = sum of:
        0.029231368 = weight(_text_:for in 4291) [ClassicSimilarity], result of:
          0.029231368 = score(doc=4291,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.32930255 = fieldWeight in 4291, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=4291)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
              0.038433556 = score(doc=4291,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 4291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4291)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum tweeted (10), saved to Mendeley (100), and cited (10) thresholds. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.8, S.959-973
  6. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.02
    0.021960896 = product of:
      0.04392179 = sum of:
        0.024705013 = weight(_text_:for in 3211) [ClassicSimilarity], result of:
          0.024705013 = score(doc=3211,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27831143 = fieldWeight in 3211, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
              0.038433556 = score(doc=3211,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 3211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3036-3050
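  Several of the listed studies (results 6, 10 and 17) summarise indicator relationships with Spearman correlations, for example between Mendeley reader counts and citation counts. A minimal sketch of that calculation on invented counts, assuming scipy is available:
    from scipy.stats import spearmanr

    # Hypothetical per-article counts for a small set of articles.
    citations = [0, 2, 5, 7, 12, 19, 25, 40]
    readers = [3, 1, 9, 4, 10, 30, 15, 35]

    rho, p_value = spearmanr(citations, readers)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
    # Result 17 reports a stable rho of about 0.6 between readers and citers
    # for LIS journals, i.e. a moderately strong monotonic association.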
  7. Thelwall, M.; Sud, P.; Wilkinson, D.: Link and co-inlink network diagrams with URL citations or title mentions (2012) 0.02
    0.020186728 = product of:
      0.040373456 = sum of:
        0.024359472 = weight(_text_:for in 57) [ClassicSimilarity], result of:
          0.024359472 = score(doc=57,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27441877 = fieldWeight in 57, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=57)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.032027967 = score(doc=57,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Webometric network analyses have been used to map the connectivity of groups of websites to identify clusters, important sites or overall structure. Such analyses have mainly been based upon hyperlink counts, the number of hyperlinks between a pair of websites, although some have used title mentions or URL citations instead. The ability to automatically gather hyperlink counts from Yahoo! ceased in April 2011 and the ability to manually gather such counts was due to cease by early 2012, creating a need for alternatives. This article assesses URL citations and title mentions as possible replacements for hyperlinks in both binary and weighted direct link and co-inlink network diagrams. It also assesses three different types of data for the network connections: hit count estimates, counts of matching URLs, and filtered counts of matching URLs. Results from analyses of U.S. library and information science departments and U.K. universities give evidence that metrics based upon URLs or titles can be appropriate replacements for metrics based upon hyperlinks for both binary and weighted networks, although filtered counts of matching URLs are necessary to give the best results for co-title mention and co-URL citation network diagrams.
    Date
    6. 4.2012 18:16:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.4, S.805-816
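  Result 7 constructs weighted co-inlink networks, in which two target sites are connected whenever a third-party source links to, or mentions the URLs of, both. A minimal sketch of that construction using networkx; the site names and link data are invented for illustration.
    from itertools import combinations
    import networkx as nx

    # Hypothetical inlink data: source page -> target sites it links to
    # (or whose URLs it mentions, for the URL-citation variant).
    inlinks = {
        "blog.example.org": {"lis-dept-a.example.edu", "lis-dept-b.example.edu"},
        "news.example.com": {"lis-dept-a.example.edu", "lis-dept-c.example.edu"},
        "wiki.example.net": {"lis-dept-a.example.edu", "lis-dept-b.example.edu",
                             "lis-dept-c.example.edu"},
    }

    G = nx.Graph()
    for source, targets in inlinks.items():
        # Each pair of jointly linked targets gains one unit of co-inlink weight.
        for a, b in combinations(sorted(targets), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    for a, b, data in G.edges(data=True):
        print(a, "--", b, "co-inlinks:", data["weight"])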
  8. Thelwall, M.: Are Mendeley reader counts high enough for research evaluations when articles are published? (2017) 0.02
    0.01830075 = product of:
      0.0366015 = sum of:
        0.020587513 = weight(_text_:for in 3806) [ClassicSimilarity], result of:
          0.020587513 = score(doc=3806,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 3806, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 3806) [ClassicSimilarity], result of:
              0.032027967 = score(doc=3806,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 3806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3806)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published. Design/methodology/approach: Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016. Findings: Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count. Research limitations/implications: Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases. Practical implications: Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents but are not high enough to differentiate between the impacts of typical individual articles. Originality/value: This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
    Date
    20. 1.2015 18:30:22
  9. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.02
    0.01830075 = product of:
      0.0366015 = sum of:
        0.020587513 = weight(_text_:for in 178) [ClassicSimilarity], result of:
          0.020587513 = score(doc=178,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 178, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=178)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
              0.032027967 = score(doc=178,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=178)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach: A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings: The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications: Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications: Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value: This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  10. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.02
    0.01830075 = product of:
      0.0366015 = sum of:
        0.020587513 = weight(_text_:for in 995) [ClassicSimilarity], result of:
          0.020587513 = score(doc=995,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 995, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=995)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 995) [ClassicSimilarity], result of:
              0.032027967 = score(doc=995,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=995)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Collaboration is encouraged because it is believed to improve academic research, supported by indirect evidence in the form of more coauthored articles being more cited. Nevertheless, this might not reflect quality but increased self-citations or the "audience effect": citations from increased awareness through multiple author networks. We address this with the first science-wide investigation into whether author numbers associate with journal article quality, using expert peer quality judgments for 122,331 articles from the 2014-20 UK national assessment. Spearman correlations between author numbers and quality scores show moderately strong positive associations (0.2-0.4) in the health, life, and physical sciences, but weak or no positive associations in engineering and social sciences, with weak negative/positive or no associations in various arts and humanities, and a possible negative association for decision sciences. This gives the first systematic evidence that greater numbers of authors associate with higher quality journal articles in the majority of academia outside the arts and humanities, at least for the UK. Positive associations between team size and citation counts in areas with little association between team size and quality also show that audience effects or other nonquality factors account for the higher citation rates of coauthored articles in some fields.
    Date
    22. 6.2023 18:11:50
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.791-810
  11. Levitt, J.M.; Thelwall, M.: Citation levels and collaboration within library and information science (2009) 0.02
    0.017833939 = product of:
      0.035667878 = sum of:
        0.013020686 = weight(_text_:for in 2734) [ClassicSimilarity], result of:
          0.013020686 = score(doc=2734,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 2734, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2734)
        0.02264719 = product of:
          0.04529438 = sum of:
            0.04529438 = weight(_text_:22 in 2734) [ClassicSimilarity], result of:
              0.04529438 = score(doc=2734,freq=4.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.27358043 = fieldWeight in 2734, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2734)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Collaboration is a major research policy objective, but does it deliver higher quality research? This study uses citation analysis to examine the Web of Science (WoS) Information Science & Library Science subject category (IS&LS) to ascertain whether, in general, more highly cited articles are more highly collaborative than other articles. It consists of two investigations. The first investigation is a longitudinal comparison of the degree and proportion of collaboration in five strata of citation; it found that collaboration in the highest four citation strata (all in the most highly cited 22%) increased in unison over time, whereas collaboration in the lowest citation stratum (un-cited articles) remained low and stable. Given that over 40% of the articles were un-cited, it seems important to take into account the differences found between un-cited articles and relatively highly cited articles when investigating collaboration in IS&LS. The second investigation compares collaboration for 35 influential information scientists; it found that their more highly cited articles on average were not more highly collaborative than their less highly cited articles. In summary, although collaborative research is conducive to high citation in general, collaboration has apparently not tended to be essential to the success of current and former elite information scientists.
    Date
    22. 3.2009 12:43:51
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.434-442
  12. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment in Twitter events (2011) 0.02
    0.0174208 = product of:
      0.0348416 = sum of:
        0.015624823 = weight(_text_:for in 4345) [ClassicSimilarity], result of:
          0.015624823 = score(doc=4345,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 4345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=4345)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 4345) [ClassicSimilarity], result of:
              0.038433556 = score(doc=4345,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 4345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4345)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The microblogging site Twitter generates a constant stream of communication, some of which concerns events of general interest. An analysis of Twitter may, therefore, give insights into why particular events resonate with the population. This article reports a study of a month of English Twitter posts, assessing whether popular events are typically associated with increases in sentiment strength, as seems intuitively likely. Using the top 30 events, determined by a measure of relative increase in (general) term usage, the results give strong evidence that popular events are normally associated with increases in negative sentiment strength and some evidence that peaks of interest in events have stronger positive sentiment than the time before the peak. It seems that many positive events, such as the Oscars, are capable of generating increased negative sentiment in reaction to them. Nevertheless, the surprisingly small average change in sentiment associated with popular events (typically 1% and only 6% for Tiger Woods' confessions) is consistent with events affording posters opportunities to satisfy pre-existing personal goals more often than eliciting instinctive reactions.
    Date
    22. 1.2011 14:27:06
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.406-418
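  Result 12 selects its top 30 events by a measure of relative increase in general term usage between time windows. The sketch below shows one plausible way to compute such a measure from raw term counts; the smoothing constant and the sample counts are assumptions, not the paper's exact formula.
    from collections import Counter

    def relative_increase(before, during, smoothing=1.0):
        """Rank terms by how much their usage rate grew between two time windows."""
        total_before = sum(before.values()) or 1
        total_during = sum(during.values()) or 1
        scores = {}
        for term, count in during.items():
            rate_during = count / total_during
            rate_before = (before.get(term, 0) + smoothing) / (total_before + smoothing)
            scores[term] = rate_during / rate_before
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    before = Counter({"film": 40, "music": 35, "goal": 30, "oscars": 2})
    during = Counter({"film": 45, "music": 30, "goal": 28, "oscars": 60})
    print(relative_increase(before, during)[:2])  # 'oscars' ranks first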
  13. Thelwall, M.; Maflahi, N.: Guideline references and academic citations as evidence of the clinical value of health research (2016) 0.02
    0.0174208 = product of:
      0.0348416 = sum of:
        0.015624823 = weight(_text_:for in 2856) [ClassicSimilarity], result of:
          0.015624823 = score(doc=2856,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 2856, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2856)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2856) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2856,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2856)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article introduces a new source of evidence of the value of medical-related research: citations from clinical guidelines. These give evidence that research findings have been used to inform the day-to-day practice of medical staff. To identify whether citations from guidelines can give different information from that of traditional citation counts, this article assesses the extent to which references in clinical guidelines tend to be highly cited in the academic literature and highly read in Mendeley. Using evidence from the United Kingdom, references associated with the UK's National Institute for Health and Clinical Excellence (NICE) guidelines tended to be substantially more cited than comparable articles, unless they had been published in the most recent 3 years. Citation counts also seemed to be stronger indicators than Mendeley readership altmetrics. Hence, although presence in guidelines may be particularly useful to highlight the contributions of recently published articles, for older articles citation counts may already be sufficient to recognize their contributions to health in society.
    Date
    19. 3.2016 12:22:00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.960-966
  14. Thelwall, M.; Buckley, K.; Paltoglou, G.; Cai, D.; Kappas, A.: Sentiment strength detection in short informal text (2010) 0.02
    0.0159805 = product of:
      0.031961 = sum of:
        0.01594702 = weight(_text_:for in 4200) [ClassicSimilarity], result of:
          0.01594702 = score(doc=4200,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 4200, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4200)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 4200) [ClassicSimilarity], result of:
              0.032027967 = score(doc=4200,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 4200, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4200)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A huge number of informal messages are posted every day in social network sites, blogs, and discussion forums. Emotions seem to be frequently important in these texts for expressing friendship, showing social support or as part of online arguments. Algorithms to identify sentiment and sentiment strength are needed to help understand the role of emotion in this informal communication and also to identify inappropriate or anomalous affective utterances, potentially associated with threatening behavior to the self or others. Nevertheless, existing sentiment detection algorithms tend to be commercially oriented, designed to identify opinions about products rather than user behaviors. This article partly fills this gap with a new algorithm, SentiStrength, to extract sentiment strength from informal English text, using new methods to exploit the de facto grammars and spelling styles of cyberspace. Applied to MySpace comments and with a lookup table of term sentiment strengths optimized by machine learning, SentiStrength is able to predict positive emotion with 60.6% accuracy and negative emotion with 72.8% accuracy, both based upon strength scales of 1-5. The former, but not the latter, is better than baseline and a wide range of general machine learning approaches.
    Date
    22. 1.2011 14:29:23
    Footnote
    Vgl. auch das Erratum in: Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.419
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2544-2558
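  Result 14 describes SentiStrength, which assigns separate positive and negative strengths on 1-5 scales using a lookup table of term sentiment strengths tuned by machine learning. The following is a minimal lexicon-lookup sketch in that spirit only; the tiny lexicon, the booster handling and the take-the-strongest-term rule are illustrative assumptions, not the published algorithm.
    import re

    # Toy term-strength lexicon: positive strengths 2..5, negative strengths -2..-5.
    LEXICON = {"love": 4, "great": 3, "good": 2, "hate": -4, "awful": -4, "bad": -3}
    BOOSTERS = {"really": 1, "very": 1}  # intensifiers add one point of strength

    def sentiment_strength(text):
        """Return (positive, negative) strengths on 1..5 and -1..-5 scales."""
        words = re.findall(r"[a-z']+", text.lower())
        pos, neg = 1, -1  # neutral baselines
        for i, word in enumerate(words):
            score = LEXICON.get(word)
            if score is None:
                continue
            boost = BOOSTERS.get(words[i - 1], 0) if i > 0 else 0
            if score > 0:
                pos = max(pos, min(5, score + boost))
            else:
                neg = min(neg, max(-5, score - boost))
        return pos, neg

    print(sentiment_strength("I really love this band but the sound was awful"))
    # -> (5, -4): strong positive and negative sentiment in the same message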
  15. Li, X.; Thelwall, M.; Kousha, K.: ¬The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.01
    0.012610499 = product of:
      0.025220998 = sum of:
        0.009207015 = weight(_text_:for in 2593) [ClassicSimilarity], result of:
          0.009207015 = score(doc=2593,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.103720546 = fieldWeight in 2593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2593)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
              0.032027967 = score(doc=2593,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 2593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2593)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The four major Subject Repositories (SRs), arXiv, Research Papers in Economics (RePEc), Social Science Research Network (SSRN) and PubMed Central (PMC), are all important within their disciplines but no previous study has systematically compared how often they are cited in academic publications. In response, the purpose of this paper is to report an analysis of citations to SRs from Scopus publications, 2000-2013. Design/methodology/approach: Scopus searches were used to count the number of documents citing the four SRs in each year. A random sample of 384 documents citing the four SRs was then visited to investigate the nature of the citations. Findings: Each SR was most cited within its own subject area but attracted substantial citations from other subject areas, suggesting that they are open to interdisciplinary uses. The proportion of documents citing each SR is continuing to increase rapidly, and the SRs all seem to attract substantial numbers of citations from more than one discipline. Research limitations/implications: Scopus does not cover all publications, and most citations to documents found in the four SRs presumably cite the published version, when one exists, rather than the repository version. Practical implications: SRs are continuing to grow and do not seem to be threatened by institutional repositories and so research managers should encourage their continued use within their core disciplines, including for research that aims at an audience in other disciplines. Originality/value: This is the first simultaneous analysis of Scopus citations to the four most popular SRs.
    Date
    20. 1.2015 18:30:22
  16. Thelwall, M.; Buckley, K.: Topic-based sentiment analysis for the social web : the role of mood and issue-related words (2013) 0.01
    0.007307842 = product of:
      0.029231368 = sum of:
        0.029231368 = weight(_text_:for in 1004) [ClassicSimilarity], result of:
          0.029231368 = score(doc=1004,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.32930255 = fieldWeight in 1004, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1004)
      0.25 = coord(1/4)
    
    Abstract
    General sentiment analysis for the social web has become increasingly useful for shedding light on the role of emotion in online communication and offline events in both academic research and data journalism. Nevertheless, existing general-purpose social web sentiment analysis algorithms may not be optimal for texts focussed around specific topics. This article introduces 2 new methods, mood setting and lexicon extension, to improve the accuracy of topic-specific lexical sentiment strength detection for the social web. Mood setting allows the topic mood to determine the default polarity for ostensibly neutral expressive text. Topic-specific lexicon extension involves adding topic-specific words to the default general sentiment lexicon. Experiments with 8 data sets show that both methods can improve sentiment analysis performance in corpora and are recommended when the topic focus is tightest.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1608-1617
  17. Maflahi, N.; Thelwall, M.: When are readership counts as useful as citation counts? : Scopus versus Mendeley for LIS journals (2016) 0.01
    0.007307842 = product of:
      0.029231368 = sum of:
        0.029231368 = weight(_text_:for in 2495) [ClassicSimilarity], result of:
          0.029231368 = score(doc=2495,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.32930255 = fieldWeight in 2495, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2495)
      0.25 = coord(1/4)
    
    Abstract
    In theory, articles can attract readers on the social reference sharing site Mendeley before they can attract citations, so Mendeley altmetrics could provide early indications of article impact. This article investigates the influence of time on the number of Mendeley readers of an article through a theoretical discussion and an investigation into the relationship between counts of readers of, and citations to, 4 general library and information science (LIS) journals. For this discipline, it takes about 7 years for articles to attract as many Scopus citations as Mendeley readers, and after this the Spearman correlation between readers and citers is stable at about 0.6 for all years. This suggests that Mendeley readership counts may be useful impact indicators for both newer and older articles. The lack of dates for individual Mendeley article readers and an unknown bias toward more recent articles mean that readership data should be normalized individually by year, however, before making any comparisons between articles published in different years.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.191-199
  18. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.01
    0.0072787846 = product of:
      0.029115139 = sum of:
        0.029115139 = weight(_text_:for in 3384) [ClassicSimilarity], result of:
          0.029115139 = score(doc=3384,freq=20.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.32799318 = fieldWeight in 3384, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
      0.25 = coord(1/4)
    
    Abstract
    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
  19. Price, L.; Thelwall, M.: ¬The clustering power of low frequency words in academic webs (2005) 0.01
    0.0072056293 = product of:
      0.028822517 = sum of:
        0.028822517 = weight(_text_:for in 3561) [ClassicSimilarity], result of:
          0.028822517 = score(doc=3561,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3246967 = fieldWeight in 3561, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3561)
      0.25 = coord(1/4)
    
    Abstract
    The value of low frequency words for subject-based academic Web site clustering is assessed. A new technique is introduced to compare the relative clustering power of different vocabularies. The technique is designed for word frequency tests in large document clustering exercises. Results for the Australian and New Zealand academic Web spaces indicate that low frequency words are useful for clustering academic Web sites along subject lines; removing low frequency words results in sites becoming, on average, less dissimilar to sites from other subjects.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.8, S.883-888
  20. Thelwall, M.; Sud, P.: ¬A comparison of methods for collecting web citation data for academic organizations (2011) 0.01
    0.0069052614 = product of:
      0.027621046 = sum of:
        0.027621046 = weight(_text_:for in 4626) [ClassicSimilarity], result of:
          0.027621046 = score(doc=4626,freq=18.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.31116164 = fieldWeight in 4626, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4626)
      0.25 = coord(1/4)
    
    Abstract
    The primary webometric method for estimating the online impact of an organization is to count links to its website. Link counts have been available from commercial search engines for over a decade but this was set to end by early 2012 and so a replacement is needed. This article compares link counts to two alternative methods: URL citations and organization title mentions. New variations of these methods are also introduced. The three methods are compared against each other using Yahoo!. Two of the three methods (URL citations and organization title mentions) are also compared against each other using Bing. Evidence from a case study of 131 UK universities and 49 US Library and Information Science (LIS) departments suggests that Bing's Hit Count Estimates (HCEs) for popular title searches are not useful for webometric research but that Yahoo!'s HCEs for all three types of search and Bing's URL citation HCEs seem to be consistent. For exact URL counts the results of all three methods in Yahoo! and both methods in Bing are also consistent. Four types of accuracy factors are also introduced and defined: search engine coverage, search engine retrieval variation, search engine retrieval anomalies, and query polysemy.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.8, S.1488-1497
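  Result 20 compares link counts, URL citations and organisation title mentions as webometric indicators. The commercial search-engine facilities it relied on have since been withdrawn, so the sketch below counts URL citations and title mentions over a local collection of crawled page texts instead; the page texts and matching rules are illustrative assumptions, not the article's method.
    def count_mentions(pages, url_fragment, title):
        """Count pages citing a URL (by substring) or mentioning an organisation title."""
        url_citations = sum(1 for text in pages if url_fragment in text.lower())
        title_mentions = sum(1 for text in pages if title.lower() in text.lower())
        return url_citations, title_mentions

    # Hypothetical page texts from a small local crawl.
    pages = [
        "See the study at https://www.wlv.ac.uk/research for details.",
        "The University of Wolverhampton group published new webometric results.",
        "An unrelated page about gardening.",
    ]
    urls, titles = count_mentions(pages, "wlv.ac.uk", "University of Wolverhampton")
    print(f"URL citations: {urls}, title mentions: {titles}")  # -> 1 and 1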