Search (69 results, page 1 of 4)

  • author_ss:"Thelwall, M."
  • language_ss:"e"
  • theme_ss:"Informetrie"
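  The filters above use Solr-style multi-valued string fields (the "_ss" suffix), and the per-result score breakdowns below are Lucene ClassicSimilarity explanations, which suggests the listing was produced by a filtered Solr query with debug scoring enabled. The Python sketch below is a hypothetical reconstruction of such a request: the host, core name and main query string are assumptions, while the filter-query values are taken directly from the facet list above.

      import requests

      # Hypothetical reconstruction of the search request behind this listing.
      # Host and core name are placeholders; the three filter queries mirror
      # the active facets shown above.
      params = {
          "q": "*:*",            # real query unknown; the explanations score terms such as "of", "subject", "22"
          "fq": [                # one filter query per active facet
              'author_ss:"Thelwall, M."',
              'language_ss:"e"',
              'theme_ss:"Informetrie"',
          ],
          "rows": 20,            # 20 entries per page (69 results over 4 pages)
          "debugQuery": "true",  # emits the per-document score explanations
      }
      r = requests.get("http://localhost:8983/solr/catalog/select", params=params)
      print(r.json()["response"]["numFound"])  # expected to report 69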
  1. Levitt, J.M.; Thelwall, M.: Is multidisciplinary research more highly cited? : a macrolevel study (2008) 0.05
    0.049515717 = product of:
      0.12378929 = sum of:
        0.020760437 = weight(_text_:of in 2375) [ClassicSimilarity], result of:
          0.020760437 = score(doc=2375,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27317715 = fieldWeight in 2375, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2375)
        0.10302885 = weight(_text_:subject in 2375) [ClassicSimilarity], result of:
          0.10302885 = score(doc=2375,freq=18.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.5927426 = fieldWeight in 2375, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2375)
      0.4 = coord(2/5)
    
    Abstract
    Interdisciplinary collaboration is a major goal in research policy. This study uses citation analysis to examine diverse subjects in the Web of Science and Scopus to ascertain whether, in general, research published in journals classified in more than one subject is more highly cited than research published in journals classified in a single subject. For each subject, the study divides the journals into two disjoint sets called Multi and Mono. Multi consists of all journals in the subject and at least one other subject whereas Mono consists of all journals in the subject and in no other subject. The main findings are: (a) For social science subject categories in both the Web of Science and Scopus, the average citation levels of articles in Mono and Multi are very similar; and (b) for Scopus subject categories within life sciences, health sciences, and physical sciences, the average citation level of Mono articles is roughly twice that of Multi articles. Hence, one cannot assume that in general, multidisciplinary research will be more highly cited, and the converse is probably true for many areas of science. A policy implication is that, at least in the sciences, multidisciplinary researchers should not be evaluated by citations on the same basis as monodisciplinary researchers.
    Object
    Web of Science
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, S.1973-1984
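  The indented breakdown under each entry is Lucene's ClassicSimilarity (TF-IDF) explanation of its relevance score. As a worked check, the short Python sketch below recomputes the 0.05 score of entry 1 from nothing but the statistics reported in that explanation; the helper name term_score is introduced here for illustration only.

      import math

      # Numbers below are exactly those reported in the ClassicSimilarity
      # explanation for entry 1 (doc 2375): queryNorm, fieldNorm, per-term
      # idf and term frequency.
      query_norm = 0.04859849
      field_norm = 0.0390625

      def term_score(freq, idf):
          # fieldWeight = tf * idf * fieldNorm, where tf = sqrt(termFreq)
          tf = math.sqrt(freq)                    # 4.472136 for freq=20, 4.2426405 for freq=18
          field_weight = tf * idf * field_norm    # 0.27317715 for "of", 0.5927426 for "subject"
          query_weight = idf * query_norm         # 0.07599624 for "of", 0.17381717 for "subject"
          return query_weight * field_weight      # the per-term "weight(...)" line

      w_of = term_score(20.0, 1.5637573)          # ~0.020760437
      w_subject = term_score(18.0, 3.576596)      # ~0.10302885
      score = (w_of + w_subject) * (2.0 / 5.0)    # coord(2/5): 2 of 5 query terms matched
      print(score)                                 # ~0.049515717, shown rounded to 0.05 above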
  2. Levitt, J.M.; Thelwall, M.: Citation levels and collaboration within library and information science (2009) 0.05
    0.04571467 = product of:
      0.07619111 = sum of:
        0.0185687 = weight(_text_:of in 2734) [ClassicSimilarity], result of:
          0.0185687 = score(doc=2734,freq=16.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.24433708 = fieldWeight in 2734, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2734)
        0.034342952 = weight(_text_:subject in 2734) [ClassicSimilarity], result of:
          0.034342952 = score(doc=2734,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 2734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2734)
        0.023279455 = product of:
          0.04655891 = sum of:
            0.04655891 = weight(_text_:22 in 2734) [ClassicSimilarity], result of:
              0.04655891 = score(doc=2734,freq=4.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.27358043 = fieldWeight in 2734, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2734)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
     Collaboration is a major research policy objective, but does it deliver higher quality research? This study uses citation analysis to examine the Web of Science (WoS) Information Science & Library Science subject category (IS&LS) to ascertain whether, in general, more highly cited articles are more highly collaborative than other articles. It consists of two investigations. The first investigation is a longitudinal comparison of the degree and proportion of collaboration in five strata of citation; it found that collaboration in the highest four citation strata (all in the most highly cited 22%) increased in unison over time, whereas collaboration in the lowest citation stratum (uncited articles) remained low and stable. Given that over 40% of the articles were uncited, it seems important to take into account the differences found between uncited articles and relatively highly cited articles when investigating collaboration in IS&LS. The second investigation compares collaboration for 35 influential information scientists; it found that their more highly cited articles on average were not more highly collaborative than their less highly cited articles. In summary, although collaborative research is conducive to high citation in general, collaboration has apparently not tended to be essential to the success of current and former elite information scientists.
    Date
    22. 3.2009 12:43:51
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.434-442
  3. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of Google unique Web citations (2007) 0.04
    0.04468473 = product of:
      0.07447455 = sum of:
        0.023670541 = weight(_text_:of in 586) [ClassicSimilarity], result of:
          0.023670541 = score(doc=586,freq=26.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.31146988 = fieldWeight in 586, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.034342952 = weight(_text_:subject in 586) [ClassicSimilarity], result of:
          0.034342952 = score(doc=586,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 586, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.032922123 = score(doc=586,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.11, S.1631-1644
  4. Kousha, K.; Thelwall, M.: Can Amazon.com reviews help to assess the wider impacts of books? (2016) 0.03
    0.028275374 = product of:
      0.070688434 = sum of:
        0.029476898 = weight(_text_:of in 2768) [ClassicSimilarity], result of:
          0.029476898 = score(doc=2768,freq=28.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.38787308 = fieldWeight in 2768, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2768)
        0.041211538 = weight(_text_:subject in 2768) [ClassicSimilarity], result of:
          0.041211538 = score(doc=2768,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.23709705 = fieldWeight in 2768, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2768)
      0.4 = coord(2/5)
    
    Abstract
    Although citation counts are often used to evaluate the research impact of academic publications, they are problematic for books that aim for educational or cultural impact. To fill this gap, this article assesses whether a number of simple metrics derived from Amazon.com reviews of academic books could provide evidence of their impact. Based on a set of 2,739 academic monographs from 2008 and a set of 1,305 best-selling books in 15 Amazon.com academic subject categories, the existence of significant but low or moderate correlations between citations and numbers of reviews, combined with other evidence, suggests that online book reviews tend to reflect the wider popularity of a book rather than its academic impact, although there are substantial disciplinary differences. Metrics based on online reviews are therefore recommended for the evaluation of books that aim at a wide audience inside or outside academia when it is important to capture the broader impacts of educational or cultural activities and when they cannot be manipulated in advance of the evaluation.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.566-581
  5. Thelwall, M.; Sud, P.: Do new research issues attract more citations? : a comparison between 25 Scopus subject categories (2021) 0.03
    0.027305339 = product of:
      0.068263344 = sum of:
        0.019695079 = weight(_text_:of in 157) [ClassicSimilarity], result of:
          0.019695079 = score(doc=157,freq=18.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.25915858 = fieldWeight in 157, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=157)
        0.048568267 = weight(_text_:subject in 157) [ClassicSimilarity], result of:
          0.048568267 = score(doc=157,freq=4.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.27942157 = fieldWeight in 157, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=157)
      0.4 = coord(2/5)
    
    Abstract
    Finding new ways to help researchers and administrators understand academic fields is an important task for information scientists. Given the importance of interdisciplinary research, it is essential to be aware of disciplinary differences in aspects of scholarship, such as the significance of recent changes in a field. This paper identifies potential changes in 25 subject categories through a term comparison of words in article titles, keywords and abstracts in 1 year compared to the previous 4 years. The scholarly influence of new research issues is indirectly assessed with a citation analysis of articles matching each trending term. While topic-related words dominate the top terms, style, national focus, and language changes are also evident. Thus, as reflected in Scopus, fields evolve along multiple dimensions. Moreover, while articles exploiting new issues are usually more cited in some fields, such as Organic Chemistry, they are usually less cited in others, including History. The possible causes of new issues being less cited include externally driven temporary factors, such as disease outbreaks, and internally driven temporary decisions, such as a deliberate emphasis on a single topic (e.g., through a journal special issue).
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.3, S.269-279
  6. Thelwall, M.: A layered approach for investigating the topological structure of communities in the Web (2003) 0.02
    0.024241224 = product of:
      0.06060306 = sum of:
        0.026260108 = weight(_text_:of in 4450) [ClassicSimilarity], result of:
          0.026260108 = score(doc=4450,freq=32.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.34554482 = fieldWeight in 4450, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4450)
        0.034342952 = weight(_text_:subject in 4450) [ClassicSimilarity], result of:
          0.034342952 = score(doc=4450,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 4450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4450)
      0.4 = coord(2/5)
    
    Abstract
     A layered approach for identifying communities in the Web is presented and explored by applying the Flake exact community identification algorithm to the UK academic Web. Although community or topic identification is a common task in information retrieval, a new perspective is developed by: the application of alternative document models, shifting the focus from individual pages to aggregated collections based upon Web directories, domains and entire sites; the removal of internal site links; and the adaptation of a new fast algorithm to allow fully-automated community identification using all possible single starting points. The overall topology of the graphs in the three least-aggregated layers was first investigated and found to include a large number of isolated points but, surprisingly, with most of the remainder being in one huge connected component, exact proportions varying by layer. The community identification process then found that the number of communities far exceeded the number of topological components, indicating that community identification is a potentially useful technique, even with random starting points. Both the number and size of communities identified were dependent on the parameter of the algorithm, with very different results being obtained in each case. In conclusion, the UK academic Web is embedded with layers of non-trivial communities and, if it is not unique in this, then there is the promise of improved results for information retrieval algorithms that can exploit this additional structure, and the application of the technique directly to partially automate Web metrics tasks such as that of finding all pages related to a given subject hosted by a single country's universities.
    Source
    Journal of documentation. 59(2003) no.4, S.410-429
  7. Kousha, K.; Thelwall, M.; Abdoli, M.: The role of online videos in research communication : a content analysis of YouTube videos cited in academic publications (2012) 0.02
    0.021615213 = product of:
      0.054038033 = sum of:
        0.019695079 = weight(_text_:of in 382) [ClassicSimilarity], result of:
          0.019695079 = score(doc=382,freq=18.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.25915858 = fieldWeight in 382, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=382)
        0.034342952 = weight(_text_:subject in 382) [ClassicSimilarity], result of:
          0.034342952 = score(doc=382,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 382, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=382)
      0.4 = coord(2/5)
    
    Abstract
    Although there is some evidence that online videos are increasingly used by academics for informal scholarly communication and teaching, the extent to which they are used in published academic research is unknown. This article explores the extent to which YouTube videos are cited in academic publications and whether there are significant broad disciplinary differences in this practice. To investigate, we extracted the URL citations to YouTube videos from academic publications indexed by Scopus. A total of 1,808 Scopus publications cited at least one YouTube video, and there was a steady upward growth in citing online videos within scholarly publications from 2006 to 2011, with YouTube citations being most common within arts and humanities (0.3%) and the social sciences (0.2%). A content analysis of 551 YouTube videos cited by research articles indicated that in science (78%) and in medicine and health sciences (77%), over three fourths of the cited videos had either direct scientific (e.g., laboratory experiments) or scientific-related contents (e.g., academic lectures or education) whereas in the arts and humanities, about 80% of the YouTube videos had art, culture, or history themes, and in the social sciences, about 63% of the videos were related to news, politics, advertisements, and documentaries. This shows both the disciplinary differences and the wide variety of innovative research communication uses found for videos within the different subject areas.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1710-1727
  8. Kousha, K.; Thelwall, M.; Rezaie, S.: Assessing the citation impact of books : the role of Google Books, Google Scholar, and Scopus (2011) 0.02
    0.020684952 = product of:
      0.05171238 = sum of:
        0.017369429 = weight(_text_:of in 4920) [ClassicSimilarity], result of:
          0.017369429 = score(doc=4920,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.22855641 = fieldWeight in 4920, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4920)
        0.034342952 = weight(_text_:subject in 4920) [ClassicSimilarity], result of:
          0.034342952 = score(doc=4920,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 4920, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4920)
      0.4 = coord(2/5)
    
    Abstract
     Citation indicators are increasingly used in some subject areas to support peer review in the evaluation of researchers and departments. Nevertheless, traditional journal-based citation indexes may be inadequate for the citation impact assessment of book-based disciplines. This article examines whether online citations from Google Books and Google Scholar can provide alternative sources of citation evidence. To investigate this, we compared the citation counts to 1,000 books submitted to the 2008 U.K. Research Assessment Exercise (RAE) from Google Books and Google Scholar with Scopus citations across seven book-based disciplines (archaeology; law; politics and international studies; philosophy; sociology; history; and communication, cultural, and media studies). Google Books and Google Scholar citations to books were 1.4 and 3.2 times more common than were Scopus citations, and their medians were more than twice and three times as high as were Scopus median citations, respectively. This large number of citations is evidence that in book-oriented disciplines in the social sciences, arts, and humanities, online book citations may be sufficiently numerous to support peer review for research evaluation, at least in the United Kingdom.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.11, S.2147-2164
  9. Kousha, K.; Thelwall, M.: An automatic method for extracting citations from Google Books (2015) 0.02
    0.01960912 = product of:
      0.049022797 = sum of:
        0.014679846 = weight(_text_:of in 1658) [ClassicSimilarity], result of:
          0.014679846 = score(doc=1658,freq=10.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.19316542 = fieldWeight in 1658, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1658)
        0.034342952 = weight(_text_:subject in 1658) [ClassicSimilarity], result of:
          0.034342952 = score(doc=1658,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 1658, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1658)
      0.4 = coord(2/5)
    
    Abstract
     Recent studies have shown that counting citations from books can help scholarly impact assessment and that Google Books (GB) is a useful source of such citation counts, despite its lack of a public citation index. Searching GB for citations produces approximate matches, however, and so its raw results need time-consuming human filtering. In response, this article introduces a method to automatically remove false and irrelevant matches from GB citation searches in addition to introducing refinements to a previous GB manual citation extraction method. The method was evaluated by manual checking of sampled GB results and comparing citations to about 14,500 monographs in the Thomson Reuters Book Citation Index (BKCI) against automatically extracted citations from GB across 24 subject areas. GB citations were 103% to 137% as numerous as BKCI citations in the humanities, except for tourism (72%) and linguistics (91%), 46% to 85% in social sciences, but only 8% to 53% in the sciences. In all cases, however, GB had substantially more citing books than did BKCI, with BKCI's results coming predominantly from journal articles. Moderate correlations between the GB and BKCI citation counts in social sciences and humanities, with most BKCI results coming from journal articles rather than books, suggest that they could measure different aspects of impact, however.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.2, S.309-320
  10. Thelwall, M.; Maflahi, N.: Guideline references and academic citations as evidence of the clinical value of health research (2016) 0.02
    0.017866319 = product of:
      0.0446658 = sum of:
        0.024912525 = weight(_text_:of in 2856) [ClassicSimilarity], result of:
          0.024912525 = score(doc=2856,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.32781258 = fieldWeight in 2856, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2856)
        0.019753272 = product of:
          0.039506543 = sum of:
            0.039506543 = weight(_text_:22 in 2856) [ClassicSimilarity], result of:
              0.039506543 = score(doc=2856,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.23214069 = fieldWeight in 2856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2856)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     This article introduces a new source of evidence of the value of medical-related research: citations from clinical guidelines. These give evidence that research findings have been used to inform the day-to-day practice of medical staff. To identify whether citations from guidelines can give different information from that of traditional citation counts, this article assesses the extent to which references in clinical guidelines tend to be highly cited in the academic literature and highly read in Mendeley. Using evidence from the United Kingdom, references associated with the UK's National Institute for Health and Clinical Excellence (NICE) guidelines tended to be substantially more cited than comparable articles, unless they had been published in the most recent 3 years. Citation counts also seemed to be stronger indicators than Mendeley readership altmetrics. Hence, although presence in guidelines may be particularly useful to highlight the contributions of recently published articles, for older articles citation counts may already be sufficient to recognize their contributions to health in society.
    Date
    19. 3.2016 12:22:00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.960-966
  11. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.02
    0.017866319 = product of:
      0.0446658 = sum of:
        0.024912525 = weight(_text_:of in 4291) [ClassicSimilarity], result of:
          0.024912525 = score(doc=4291,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.32781258 = fieldWeight in 4291, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4291)
        0.019753272 = product of:
          0.039506543 = sum of:
            0.039506543 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
              0.039506543 = score(doc=4291,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.23214069 = fieldWeight in 4291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4291)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum tweeted (10), saved to Mendeley (100), and cited (10) thresholds. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.8, S.959-973
  12. Thelwall, M.; Sud, P.; Wilkinson, D.: Link and co-inlink network diagrams with URL citations or title mentions (2012) 0.01
    0.0148885995 = product of:
      0.0372215 = sum of:
        0.020760437 = weight(_text_:of in 57) [ClassicSimilarity], result of:
          0.020760437 = score(doc=57,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27317715 = fieldWeight in 57, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=57)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.032922123 = score(doc=57,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Webometric network analyses have been used to map the connectivity of groups of websites to identify clusters, important sites or overall structure. Such analyses have mainly been based upon hyperlink counts, the number of hyperlinks between a pair of websites, although some have used title mentions or URL citations instead. The ability to automatically gather hyperlink counts from Yahoo! ceased in April 2011 and the ability to manually gather such counts was due to cease by early 2012, creating a need for alternatives. This article assesses URL citations and title mentions as possible replacements for hyperlinks in both binary and weighted direct link and co-inlink network diagrams. It also assesses three different types of data for the network connections: hit count estimates, counts of matching URLs, and filtered counts of matching URLs. Results from analyses of U.S. library and information science departments and U.K. universities give evidence that metrics based upon URLs or titles can be appropriate replacements for metrics based upon hyperlinks for both binary and weighted networks, although filtered counts of matching URLs are necessary to give the best results for co-title mention and co-URL citation network diagrams.
    Date
    6. 4.2012 18:16:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.4, S.805-816
  13. Thelwall, M.: Are Mendeley reader counts high enough for research evaluations when articles are published? (2017) 0.01
    0.0148885995 = product of:
      0.0372215 = sum of:
        0.020760437 = weight(_text_:of in 3806) [ClassicSimilarity], result of:
          0.020760437 = score(doc=3806,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27317715 = fieldWeight in 3806, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 3806) [ClassicSimilarity], result of:
              0.032922123 = score(doc=3806,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 3806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3806)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Purpose: Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published.
     Design/methodology/approach: Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016.
     Findings: Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count.
     Research limitations/implications: Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases.
     Practical implications: Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents but are not high enough to differentiate between the impacts of typical individual articles.
     Originality/value: This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 69(2017) no.2, S.174-183
  14. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.01
    0.014203735 = product of:
      0.035509337 = sum of:
        0.015756065 = weight(_text_:of in 3211) [ClassicSimilarity], result of:
          0.015756065 = score(doc=3211,freq=8.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.20732689 = fieldWeight in 3211, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.019753272 = product of:
          0.039506543 = sum of:
            0.039506543 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
              0.039506543 = score(doc=3211,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.23214069 = fieldWeight in 3211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3036-3050
  15. Thelwall, M.; Thelwall, S.: A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.01
    0.012456364 = product of:
      0.031140909 = sum of:
        0.014679846 = weight(_text_:of in 178) [ClassicSimilarity], result of:
          0.014679846 = score(doc=178,freq=10.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.19316542 = fieldWeight in 178, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=178)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
              0.032922123 = score(doc=178,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=178)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Purpose: Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19.
     Design/methodology/approach: A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020.
     Findings: The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news.
     Research limitations/implications: Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed.
     Practical implications: Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues.
     Originality/value: This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.6, S.945-962
  16. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.01
    0.012456364 = product of:
      0.031140909 = sum of:
        0.014679846 = weight(_text_:of in 995) [ClassicSimilarity], result of:
          0.014679846 = score(doc=995,freq=10.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.19316542 = fieldWeight in 995, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=995)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 995) [ClassicSimilarity], result of:
              0.032922123 = score(doc=995,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=995)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Collaboration is encouraged because it is believed to improve academic research, supported by indirect evidence in the form of more coauthored articles being more cited. Nevertheless, this might not reflect quality but increased self-citations or the "audience effect": citations from increased awareness through multiple author networks. We address this with the first science-wide investigation into whether author numbers associate with journal article quality, using expert peer quality judgments for 122,331 articles from the 2014-20 UK national assessment. Spearman correlations between author numbers and quality scores show moderately strong positive associations (0.2-0.4) in the health, life, and physical sciences, but weak or no positive associations in engineering and social sciences, with weak negative/positive or no associations in various arts and humanities, and a possible negative association for decision sciences. This gives the first systematic evidence that greater numbers of authors associate with higher-quality journal articles in the majority of academia outside the arts and humanities, at least for the UK. Positive associations between team size and citation counts in areas with little association between team size and quality also show that audience effects or other nonquality factors account for the higher citation rates of coauthored articles in some fields.
    Date
    22. 6.2023 18:11:50
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.791-810
  17. Thelwall, M.; Wilkinson, D.: Finding similar academic Web sites with links, bibliometric couplings and colinks (2004) 0.01
    0.00568093 = product of:
      0.02840465 = sum of:
        0.02840465 = weight(_text_:of in 2571) [ClassicSimilarity], result of:
          0.02840465 = score(doc=2571,freq=26.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.37376386 = fieldWeight in 2571, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
      0.2 = coord(1/5)
    
    Abstract
     A common task in both Webmetrics and Web information retrieval is to identify a set of Web pages or sites that are similar in content. In this paper we assess the extent to which links, colinks and couplings can be used to identify similar Web sites. As an experiment, a random sample of 500 pairs of domains from the UK academic Web was taken and human assessments of site similarity, based upon content type, were compared against ratings for the three concepts. The results show that using a combination of all three gives the highest probability of identifying similar sites, but surprisingly this was only a marginal improvement over using links alone. Another unexpected result was that high values for either colink counts or couplings were associated with only a small increased likelihood of similarity. The principal advantage of using couplings and colinks was found to be greater coverage in terms of a much larger number of pairs of sites being connected by these measures, instead of increased probability of similarity. In information retrieval terminology, this is improved recall rather than improved precision.
  18. Didegah, F.; Thelwall, M.: Determinants of research citation impact in nanoscience and nanotechnology (2013) 0.01
    0.00568093 = product of:
      0.02840465 = sum of:
        0.02840465 = weight(_text_:of in 737) [ClassicSimilarity], result of:
          0.02840465 = score(doc=737,freq=26.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.37376386 = fieldWeight in 737, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=737)
      0.2 = coord(1/5)
    
    Abstract
    This study investigates a range of metrics available when a nanoscience and nanotechnology article is published to see which metrics correlate more with the number of citations to the article. It also introduces the degree of internationality of journals and references as new metrics for this purpose. The journal impact factor; the impact of references; the internationality of authors, journals, and references; and the number of authors, institutions, and references were all calculated for papers published in nanoscience and nanotechnology journals in the Web of Science from 2007 to 2009. Using a zero-inflated negative binomial regression model on the data set, the impact factor of the publishing journal and the citation impact of the cited references were found to be the most effective determinants of citation counts in all four time periods. In the entire 2007 to 2009 period, apart from journal internationality and author numbers and internationality, all other predictor variables had significant effects on citation counts.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.5, S.1055-1064
  19. Larivière, V.; Sugimoto, C.R.; Macaluso, B.; Milojević, S.; Cronin, B.; Thelwall, M.: arXiv E-prints and the journal of record : an analysis of roles and relationships (2014) 0.01
    0.005252022 = product of:
      0.026260108 = sum of:
        0.026260108 = weight(_text_:of in 1285) [ClassicSimilarity], result of:
          0.026260108 = score(doc=1285,freq=32.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.34554482 = fieldWeight in 1285, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1285)
      0.2 = coord(1/5)
    
    Abstract
     Since its creation in 1991, arXiv has become central to the diffusion of research in a number of fields. Combining data from the entirety of arXiv and the Web of Science (WoS), this article investigates (a) the proportion of papers across all disciplines that are on arXiv and the proportion of arXiv papers that are in the WoS, (b) the elapsed time between arXiv submission and journal publication, and (c) the aging characteristics and scientific impact of arXiv e-prints and their published version. It shows that the proportion of WoS papers found on arXiv varies across the specialties of physics and mathematics, and that only a few specialties make extensive use of the repository. Elapsed time between arXiv submission and journal publication has shortened but remains longer in mathematics than in physics. In physics, mathematics, as well as in astronomy and astrophysics, arXiv versions are cited more promptly and decay faster than WoS papers. The arXiv versions of papers, both published and unpublished, have lower citation rates than published papers, although there is almost no difference in the impact of the arXiv versions of published and unpublished papers.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.6, S.1157-1169
  20. Thelwall, M.: Interpreting social science link analysis research : a theoretical framework (2006) 0.01
    0.0052256957 = product of:
      0.026128478 = sum of:
        0.026128478 = weight(_text_:of in 4908) [ClassicSimilarity], result of:
          0.026128478 = score(doc=4908,freq=22.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.34381276 = fieldWeight in 4908, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4908)
      0.2 = coord(1/5)
    
    Abstract
     Link analysis in various forms is now an established technique in many different subjects, reflecting the perceived importance of links and of the Web. A critical but very difficult issue is how to interpret the results of social science link analyses. It is argued that the dynamic nature of the Web, its lack of quality control, and the online proliferation of copying and imitation mean that methodologies operating within a highly positivist, quantitative framework are ineffective. Conversely, the sheer variety of the Web makes application of qualitative methodologies and pure reason very problematic to large-scale studies. Methodology triangulation is consequently advocated, in combination with a warning that the Web is incapable of giving definitive answers to large-scale link analysis research questions concerning social factors underlying link creation. Finally, it is claimed that although theoretical frameworks are appropriate for guiding research, a Theory of Link Analysis is not possible.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.1, S.60-68