Search (29 results, page 1 of 2)

  • author_ss:"Thelwall, M."
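  The filter above is a field query (author_ss follows Solr's dynamic-field naming for multi-valued strings), and the per-result score breakdowns below are Lucene "explain" output, which Solr emits when debugging is switched on. As a hedged sketch only, a request along the following lines could reproduce such a listing; the host, core name, and query terms are assumptions, since only the author filter and the explain trees are visible on this page.

    import requests

    # Hypothetical host and core name; only the author filter and the use of
    # score explanations are taken from the page itself.
    params = {
        "q": "...",                         # the original query terms are not shown here
        "fq": 'author_ss:"Thelwall, M."',   # the active filter above
        "rows": 20,                         # 20 hits per page (29 results, page 1 of 2)
        "start": 0,
        "debugQuery": "true",               # produces the per-document score breakdowns
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/biblio/select", params=params)
    explanations = resp.json()["debug"]["explain"]   # explanation text keyed by document id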
  1. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of Google unique Web citations (2007) 0.04
    0.04413954 = product of:
      0.08827908 = sum of:
        0.07272967 = weight(_text_:open in 586) [ClassicSimilarity], result of:
          0.07272967 = score(doc=586,freq=4.0), product of:
            0.20672844 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.045906994 = queryNorm
            0.3518126 = fieldWeight in 586, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.015549411 = product of:
          0.031098822 = sum of:
            0.031098822 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.031098822 = score(doc=586,freq=2.0), product of:
                0.16075848 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045906994 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new types of Web citations, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
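    A note on the relevance figures: the breakdown printed under this entry (and the similar trees under every entry below) is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking formula. Each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and clause sums are scaled by the coord factors. The sketch below re-derives the 0.04413954 score for this entry from the numbers printed in its explain tree; it is an illustration only, and the function names are ours, not part of the search engine.

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        """Classic Lucene idf: 1 + ln(maxDocs / (docFreq + 1))."""
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        """queryWeight * fieldWeight for a single term, as in the explain tree."""
        term_idf = idf(doc_freq, max_docs)
        query_weight = term_idf * query_norm                      # idf * queryNorm
        field_weight = math.sqrt(freq) * term_idf * field_norm    # tf * idf * fieldNorm
        return query_weight * field_weight

    QUERY_NORM = 0.045906994   # copied from the listing above
    MAX_DOCS = 44218

    # term "open": freq=4 in doc 586, docFreq=1330, fieldNorm=0.0390625
    w_open = term_weight(4.0, 1330, MAX_DOCS, QUERY_NORM, 0.0390625)

    # term "22": freq=2, docFreq=3622, fieldNorm=0.0390625; its clause matched
    # only one of two sub-clauses, hence the extra coord(1/2) = 0.5
    w_22 = term_weight(2.0, 3622, MAX_DOCS, QUERY_NORM, 0.0390625) * 0.5

    # top level: the two clause scores are summed and scaled by coord(2/4) = 0.5
    score = (w_open + w_22) * 0.5

    print(round(w_open, 8))   # ~0.07272967
    print(round(w_22, 8))     # ~0.01554941
    print(round(score, 8))    # ~0.04413954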
  2. Thelwall, M.; Kousha, K.: Online presentations as a source of scientific impact? : an analysis of PowerPoint files citing academic journals (2008) 0.04
    0.041296 = product of:
      0.082592 = sum of:
        0.051427644 = weight(_text_:open in 1614) [ClassicSimilarity], result of:
          0.051427644 = score(doc=1614,freq=2.0), product of:
            0.20672844 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.045906994 = queryNorm
            0.24876907 = fieldWeight in 1614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1614)
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 1614) [ClassicSimilarity], result of:
              0.062328715 = score(doc=1614,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 1614, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1614)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Open-access online publication has made available an increasingly wide range of document types for scientometric analysis. In this article, we focus on citations in online presentations, seeking evidence of their value as nontraditional indicators of research impact. For this purpose, we searched for online PowerPoint files mentioning any one of 1,807 ISI-indexed journals in ten science and ten social science disciplines. We also manually classified 1,378 online PowerPoint citations to journals in eight additional science and social science disciplines. The results showed that very few journals were cited frequently enough in online PowerPoint files to make impact assessment worthwhile, with the main exceptions being popular magazines like Scientific American and Harvard Business Review. Surprisingly, however, there was little difference overall in the number of PowerPoint citations to science and to the social sciences, and also in the proportion representing traditional impact (about 60%) and wider impact (about 15%). It seems that the main scientometric value for online presentations may be in tracking the popularization of research, or for comparing the impact of whole journals rather than individual articles.
  3. Li, X.; Thelwall, M.; Kousha, K.: The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.03
    0.033488527 = product of:
      0.066977054 = sum of:
        0.051427644 = weight(_text_:open in 2593) [ClassicSimilarity], result of:
          0.051427644 = score(doc=2593,freq=2.0), product of:
            0.20672844 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.045906994 = queryNorm
            0.24876907 = fieldWeight in 2593, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2593)
        0.015549411 = product of:
          0.031098822 = sum of:
            0.031098822 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
              0.031098822 = score(doc=2593,freq=2.0), product of:
                0.16075848 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045906994 = queryNorm
                0.19345059 = fieldWeight in 2593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2593)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The four major Subject Repositories (SRs), arXiv, Research Papers in Economics (RePEc), Social Science Research Network (SSRN) and PubMed Central (PMC), are all important within their disciplines but no previous study has systematically compared how often they are cited in academic publications. In response, the purpose of this paper is to report an analysis of citations to SRs from Scopus publications, 2000-2013.
    Design/methodology/approach: Scopus searches were used to count the number of documents citing the four SRs in each year. A random sample of 384 documents citing the four SRs was then visited to investigate the nature of the citations.
    Findings: Each SR was most cited within its own subject area but attracted substantial citations from other subject areas, suggesting that they are open to interdisciplinary uses. The proportion of documents citing each SR is continuing to increase rapidly, and the SRs all seem to attract substantial numbers of citations from more than one discipline.
    Research limitations/implications: Scopus does not cover all publications, and most citations to documents found in the four SRs presumably cite the published version, when one exists, rather than the repository version.
    Practical implications: SRs are continuing to grow and do not seem to be threatened by institutional repositories and so research managers should encourage their continued use within their core disciplines, including for research that aims at an audience in other disciplines.
    Originality/value: This is the first simultaneous analysis of Scopus citations to the four most popular SRs.
    Date
    20. 1.2015 18:30:22
  4. Thelwall, M.; Maflahi, N.: Guideline references and academic citations as evidence of the clinical value of health research (2016) 0.03
    0.02802826 = product of:
      0.11211304 = sum of:
        0.11211304 = sum of:
          0.07479446 = weight(_text_:source in 2856) [ClassicSimilarity], result of:
            0.07479446 = score(doc=2856,freq=2.0), product of:
              0.22758624 = queryWeight, product of:
                4.9575505 = idf(docFreq=844, maxDocs=44218)
                0.045906994 = queryNorm
              0.32864225 = fieldWeight in 2856, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9575505 = idf(docFreq=844, maxDocs=44218)
                0.046875 = fieldNorm(doc=2856)
          0.037318584 = weight(_text_:22 in 2856) [ClassicSimilarity], result of:
            0.037318584 = score(doc=2856,freq=2.0), product of:
              0.16075848 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045906994 = queryNorm
              0.23214069 = fieldWeight in 2856, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2856)
      0.25 = coord(1/4)
    
    Abstract
    This article introduces a new source of evidence of the value of medical-related research: citations from clinical guidelines. These give evidence that research findings have been used to inform the day-to-day practice of medical staff. To identify whether citations from guidelines can give different information from that of traditional citation counts, this article assesses the extent to which references in clinical guidelines tend to be highly cited in the academic literature and highly read in Mendeley. Using evidence from the United Kingdom, references associated with the UK's National Institute for Health and Clinical Excellence (NICE) guidelines tended to be substantially more cited than comparable articles, unless they had been published in the most recent 3 years. Citation counts also seemed to be stronger indicators than Mendeley readership altmetrics. Hence, although presence in guidelines may be particularly useful to highlight the contributions of recently published articles, for older articles citation counts may already be sufficient to recognize their contributions to health in society.
    Date
    19. 3.2016 12:22:00
  5. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.03
    0.02802826 = product of:
      0.11211304 = sum of:
        0.11211304 = sum of:
          0.07479446 = weight(_text_:source in 4291) [ClassicSimilarity], result of:
            0.07479446 = score(doc=4291,freq=2.0), product of:
              0.22758624 = queryWeight, product of:
                4.9575505 = idf(docFreq=844, maxDocs=44218)
                0.045906994 = queryNorm
              0.32864225 = fieldWeight in 4291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9575505 = idf(docFreq=844, maxDocs=44218)
                0.046875 = fieldNorm(doc=4291)
          0.037318584 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
            0.037318584 = score(doc=4291,freq=2.0), product of:
              0.16075848 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045906994 = queryNorm
              0.23214069 = fieldWeight in 4291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4291)
      0.25 = coord(1/4)
    
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum tweeted (10), saved to Mendeley (100), and cited (10) thresholds. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
  6. Harries, G.; Wilkinson, D.; Price, L.; Fairclough, R.; Thelwall, M.: Hyperlinks as a data source for science mapping : making sense of it all (2005) 0.02
    0.018698614 = product of:
      0.07479446 = sum of:
        0.07479446 = product of:
          0.14958891 = sum of:
            0.14958891 = weight(_text_:source in 4654) [ClassicSimilarity], result of:
              0.14958891 = score(doc=4654,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.6572845 = fieldWeight in 4654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4654)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  7. Shema, H.; Bar-Ilan, J.; Thelwall, M.: Do blog citations correlate with a higher number of future citations? : Research blogs as a potential source for alternative metrics (2014) 0.02
    0.018698614 = product of:
      0.07479446 = sum of:
        0.07479446 = product of:
          0.14958891 = sum of:
            0.14958891 = weight(_text_:source in 1258) [ClassicSimilarity], result of:
              0.14958891 = score(doc=1258,freq=8.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.6572845 = fieldWeight in 1258, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1258)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Journal-based citations are an important source of data for impact indices. However, the impact of journal articles extends beyond formal scholarly discourse. Measuring online scholarly impact calls for new indices, complementary to the older ones. This article examines a possible alternative metric source, blog posts aggregated at ResearchBlogging.org, which discuss peer-reviewed articles and provide full bibliographic references. Articles reviewed in these blogs therefore receive "blog citations." We hypothesized that articles receiving blog citations close to their publication time receive more journal citations later than the articles in the same journal published in the same year that did not receive such blog citations. Statistically significant evidence for articles published in 2009 and 2010 supports this hypothesis for seven of 12 journals (58%) in 2009 and 13 of 19 journals (68%) in 2010. We suggest, based on these results, that blog citations can be used as an alternative metric source.
  8. Kousha, K.; Thelwall, M.: Google Scholar citations and Google Web/URL citations : a multi-discipline exploratory analysis (2007) 0.01
    0.012856911 = product of:
      0.051427644 = sum of:
        0.051427644 = weight(_text_:open in 337) [ClassicSimilarity], result of:
          0.051427644 = score(doc=337,freq=2.0), product of:
            0.20672844 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.045906994 = queryNorm
            0.24876907 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=337)
      0.25 = coord(1/4)
    
    Abstract
    We use a new data gathering method, "Web/URL citation," together with Google Scholar, to compare traditional and Web-based citation patterns across multiple disciplines (biology, chemistry, physics, computing, sociology, economics, psychology, and education) based upon a sample of 1,650 articles from 108 open access (OA) journals published in 2001. A Web/URL citation of an online journal article is a Web mention of its title, URL, or both. For each discipline, except psychology, we found significant correlations between Thomson Scientific (formerly Thomson ISI, here: ISI) citations and both Google Scholar and Google Web/URL citations. Google Scholar citations correlated more highly with ISI citations than did Google Web/URL citations, indicating that the Web/URL method measures a broader type of citation phenomenon. Google Scholar citations were more numerous than ISI citations in computer science and the four social science disciplines, suggesting that Google Scholar is more comprehensive for social sciences and perhaps also when conference articles are valued and published online. We also found large disciplinary differences in the percentage overlap between ISI and Google Scholar citation sources. Finally, although we found many significant trends, there were also numerous exceptions, suggesting that replacing traditional citation sources with the Web or Google Scholar for research impact calculations would be problematic.
  9. Kousha, K.; Thelwall, M.: Disseminating research with web CV hyperlinks (2014) 0.01
    0.012856911 = product of:
      0.051427644 = sum of:
        0.051427644 = weight(_text_:open in 1331) [ClassicSimilarity], result of:
          0.051427644 = score(doc=1331,freq=2.0), product of:
            0.20672844 = queryWeight, product of:
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.045906994 = queryNorm
            0.24876907 = fieldWeight in 1331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.5032015 = idf(docFreq=1330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1331)
      0.25 = coord(1/4)
    
    Abstract
    Some curricula vitae (web CVs) of academics on the web, including homepages and publication lists, link to open-access (OA) articles, resources, abstracts in publishers' websites, or academic discussions, helping to disseminate research. To assess how common such practices are and whether they vary by discipline, gender, and country, the authors conducted a large-scale e-mail survey of astronomy and astrophysics, public health, environmental engineering, and philosophy across 15 European countries and analyzed hyperlinks from web CVs of academics. About 60% of the 2,154 survey responses reported having a web CV or something similar, and there were differences between disciplines, genders, and countries. A follow-up outlink analysis of 2,700 web CVs found that a third had at least one outlink to an OA target, typically a public eprint archive or an individual self-archived file. This proportion was considerably higher in astronomy (48%) and philosophy (37%) than in environmental engineering (29%) and public health (21%). There were also differences in linking to publishers' websites, resources, and discussions. Perhaps most important, however, the amount of linking to OA publications seems to be much lower than allowed by publishers and journals, suggesting that many opportunities for disseminating full-text research online are being missed, especially in disciplines without established repositories. Moreover, few academics seem to be exploiting their CVs to link to discussions, resources, or article abstracts, which seems to be another missed opportunity for publicizing research.
  10. Thelwall, M.; Prabowo, R.; Fairclough, R.: Are raw RSS feeds suitable for broad issue scanning? : a science concern case study (2006) 0.01
    0.011018264 = product of:
      0.044073056 = sum of:
        0.044073056 = product of:
          0.08814611 = sum of:
            0.08814611 = weight(_text_:source in 6116) [ClassicSimilarity], result of:
              0.08814611 = score(doc=6116,freq=4.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.38730863 = fieldWeight in 6116, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6116)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Broad issue scanning is the task of identifying important public debates arising in a given broad issue; Really Simple Syndication (RSS) feeds are a natural information source for investigating broad issues. RSS, as originally conceived, is a method for publishing timely and concise information on the Internet, for example, about the main stories in a news site or the latest postings in a blog. RSS feeds are potentially a nonintrusive source of high-quality data about public opinion: Monitoring a large number may allow quantitative methods to extract information relevant to a given need. In this article we describe an RSS feed-based coword frequency method to identify bursts of discussion relevant to a given broad issue. A case study of public science concerns is used to demonstrate the method and assess the suitability of raw RSS feeds for broad issue scanning (i.e., without data cleansing). An attempt to identify genuine science concern debates from the corpus through investigating the top 1,000 "burst" words found only two genuine debates, however. The low success rate was mainly caused by a few pathological feeds that dominated the results and obscured any significant debates. The results point to the need to develop effective data cleansing procedures for RSS feeds, particularly if there is not a large quantity of discussion about the broad issue, and a range of potential techniques is suggested. Finally, the analysis confirmed that the time series information generated by real-time monitoring of RSS feeds could usefully illustrate the evolution of new debates relevant to a broad issue.
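    As a generic illustration only (not the coword method used in the article), the sketch below flags "burst" words in a day-by-day stream of RSS item texts: a word is flagged on a given day when its frequency is well above its average over the preceding days. The window, thresholds, and feed data are all hypothetical.

    from collections import Counter

    def daily_word_counts(docs_by_day):
        """docs_by_day: one list of RSS item texts per day; returns one Counter per day."""
        return [Counter(w for doc in docs for w in doc.lower().split())
                for docs in docs_by_day]

    def burst_words(counts, day, window=7, factor=5.0, min_count=10):
        """Words whose frequency on `day` is at least `factor` times their recent average."""
        history = counts[max(0, day - window):day] or [Counter()]
        bursts = []
        for word, freq in counts[day].items():
            baseline = sum(c[word] for c in history) / len(history)
            if freq >= min_count and freq >= factor * max(baseline, 1.0):
                bursts.append((word, freq))
        return sorted(bursts, key=lambda b: -b[1])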
  11. Kousha, K.; Thelwall, M.: Assessing the impact of disciplinary research on teaching : an automatic analysis of online syllabuses (2008) 0.01
    0.011018264 = product of:
      0.044073056 = sum of:
        0.044073056 = product of:
          0.08814611 = sum of:
            0.08814611 = weight(_text_:source in 2383) [ClassicSimilarity], result of:
              0.08814611 = score(doc=2383,freq=4.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.38730863 = fieldWeight in 2383, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2383)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The impact of published academic research in the sciences and social sciences, when measured, is commonly estimated by counting citations from journal articles. The Web has now introduced new potential sources of quantitative data online that could be used to measure aspects of research impact. In this article we assess the extent to which citations from online syllabuses could be a valuable source of evidence about the educational utility of research. An analysis of online syllabus citations to 70,700 articles published in 2003 in the journals of 12 subjects indicates that online syllabus citations were sufficiently numerous to be a useful impact indicator in some social sciences, including political science and information and library science, but not in others, nor in any sciences. This result was consistent with current social science research having, in general, more educational value than current science research. Moreover, articles frequently cited in online syllabuses were not necessarily highly cited by other articles. Hence it seems that online syllabus citations provide a valuable additional source of evidence about the impact of journals, scholars, and research articles in some social sciences.
  12. Kousha, K.; Thelwall, M.; Abdoli, M.: Goodreads reviews to assess the wider impacts of books (2017) 0.01
    0.011018264 = product of:
      0.044073056 = sum of:
        0.044073056 = product of:
          0.08814611 = sum of:
            0.08814611 = weight(_text_:source in 3768) [ClassicSimilarity], result of:
              0.08814611 = score(doc=3768,freq=4.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.38730863 = fieldWeight in 3768, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3768)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Although peer-review and citation counts are commonly used to help assess the scholarly impact of published research, informal reader feedback might also be exploited to help assess the wider impacts of books, such as their educational or cultural value. The social website Goodreads seems to be a reasonable source for this purpose because it includes a large number of book reviews and ratings by many users inside and outside of academia. To check this, Goodreads book metrics were compared with different book-based impact indicators for 15,928 academic books across broad fields. Goodreads engagements were numerous enough in the arts (85% of books had at least one), humanities (80%), and social sciences (67%) for use as a source of impact evidence. Low and moderate correlations between Goodreads book metrics and scholarly or non-scholarly indicators suggest that reader feedback in Goodreads reflects the many purposes of books rather than a single type of impact. Although Goodreads book metrics can be manipulated, they could be used guardedly by academics, authors, and publishers in evaluations.
  13. Thelwall, M.; Wouters, P.; Fry, J.: Information-centered research for large-scale analyses of new information sources (2008) 0.01
    0.010907525 = product of:
      0.0436301 = sum of:
        0.0436301 = product of:
          0.0872602 = sum of:
            0.0872602 = weight(_text_:source in 1969) [ClassicSimilarity], result of:
              0.0872602 = score(doc=1969,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.38341597 = fieldWeight in 1969, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1969)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    New mass publishing genres, such as blogs and personal home pages, provide a rich source of social data that is yet to be fully exploited by the social sciences and humanities. Information-centered research (ICR) not only provides a genuinely new and useful information science research model for this type of data, but can also contribute to the emerging e-research infrastructure. Nevertheless, ICR should not be conducted on a purely abstract level, but should relate to potentially relevant problems.
  14. Thelwall, M.: A comparison of sources of links for academic Web impact factor calculations (2002) 0.01
    0.009349307 = product of:
      0.03739723 = sum of:
        0.03739723 = product of:
          0.07479446 = sum of:
            0.07479446 = weight(_text_:source in 4474) [ClassicSimilarity], result of:
              0.07479446 = score(doc=4474,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.32864225 = fieldWeight in 4474, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4474)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    There has been much recent interest in extracting information from collections of Web links. One tool that has been used is Ingwersen's Web impact factor (WIF). It has been demonstrated that several versions of this metric can produce results that correlate with research ratings of British universities, showing that, despite being a measure of a purely Internet phenomenon, the results are susceptible to a wider interpretation. This paper addresses the question of which is the best possible domain to count backlinks from, if research is the focus of interest. WIFs for British universities calculated from several different source domains are compared, primarily the .edu, .ac.uk and .uk domains, and the entire Web. The results show that all four areas produce WIFs that correlate strongly with research ratings, but that none produce incontestably superior figures. It was also found that the WIF was less able to differentiate in more homogeneous subsets of universities, although positive results are still possible.
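    For reference, one common form of the WIF divides the number of pages in a chosen source domain that link into a university's site by the number of pages in that site; studies of this kind obtained both counts from search engine queries. The snippet below is schematic only, with made-up counts.

    def web_impact_factor(inlinking_pages: int, site_pages: int) -> float:
        """One common WIF form: pages linking into the site / pages in the site."""
        return inlinking_pages / site_pages if site_pages else 0.0

    # Hypothetical counts for a single university, with the inlink count restricted
    # to each source domain considered in the abstract above.
    site_pages = 45_000
    inlinks_by_domain = {".edu": 900, ".ac.uk": 1_200, ".uk": 1_500, "whole Web": 2_600}
    wifs = {d: web_impact_factor(n, site_pages) for d, n in inlinks_by_domain.items()}
    # These per-domain WIFs would then be correlated with research ratings across universities.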
  15. Thelwall, M.: Conceptualizing documentation on the Web : an evaluation of different heuristic-based models for counting links between university Web sites (2002) 0.01
    0.0077910894 = product of:
      0.031164357 = sum of:
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 978) [ClassicSimilarity], result of:
              0.062328715 = score(doc=978,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 978, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=978)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    All known previous Web link studies have used the Web page as the primary indivisible source document for counting purposes. Arguments are presented to explain why this is not necessarily optimal and why other alternatives have the potential to produce better results. This is despite the fact that individual Web files are often the only choice if search engines are used for raw data and are the easiest basic Web unit to identify. The central issue is that of defining the Web "document": that which should comprise the single indissoluble unit of coherent material. Three alternative heuristics are defined for the educational arena based upon the directory, the domain and the whole university site. These are then compared by implementing them on a set of 108 UK university institutional Web sites under the assumption that a more effective heuristic will tend to produce results that correlate more highly with institutional research productivity. It was discovered that the domain and directory models were able to successfully reduce the impact of anomalous linking behavior between pairs of Web sites, with the latter being the method of choice. Reasons are then given as to why a document model on its own cannot eliminate all anomalies in Web linking behavior. Finally, the results from all models give a clear confirmation of the very strong association between the research productivity of a UK university and the number of incoming links from its peers' Web sites.
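    As a rough sketch of the directory and domain heuristics just described (names and data are ours, not the paper's), the snippet below counts inter-site links at the page, directory, and domain level by collapsing URLs before deduplicating source-target pairs. The whole-university-site model would additionally need a table mapping each sub-domain to its institution, which is omitted here.

    from urllib.parse import urlparse

    def to_directory(url: str) -> str:
        p = urlparse(url)
        return p.netloc + p.path.rsplit("/", 1)[0]   # drop the file name, keep the directory

    def to_domain(url: str) -> str:
        return urlparse(url).netloc

    def count_links(links, collapse):
        """Distinct source->target pairs after collapsing both URLs with `collapse`."""
        return len({(collapse(s), collapse(t)) for s, t in links})

    # Hypothetical inter-university links: two pages in one directory linking to the same target.
    links = [
        ("http://www.uni-a.ac.uk/dept/page1.html", "http://www.uni-b.ac.uk/lib/index.html"),
        ("http://www.uni-a.ac.uk/dept/page2.html", "http://www.uni-b.ac.uk/lib/index.html"),
    ]
    print(count_links(links, lambda u: u))   # page model: 2
    print(count_links(links, to_directory))  # directory model: 1
    print(count_links(links, to_domain))     # domain model: 1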
  16. Kousha, K.; Thelwall, M.: Google Book Search : citation analysis for social science and the humanities (2009) 0.01
    0.0077910894 = product of:
      0.031164357 = sum of:
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 2946) [ClassicSimilarity], result of:
              0.062328715 = score(doc=2946,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In both the social sciences and the humanities, books and monographs play significant roles in research communication. The absence of citations from most books and monographs from the Thomson Reuters/Institute for Scientific Information databases (ISI) has been criticized, but attempts to include citations from or to books in the research evaluation of the social sciences and humanities have not led to widespread adoption. This article assesses whether Google Book Search (GBS) can partially fill this gap by comparing citations from books with citations from journal articles to journal articles in 10 science, social science, and humanities disciplines. Book citations were 31% to 212% of ISI citations and, hence, numerous enough to supplement ISI citations in the social sciences and humanities covered, but not in the sciences (3%-5%), except for computing (46%), due to numerous published conference proceedings. A case study was also made of all 1,923 articles in the 51 information science and library science ISI-indexed journals published in 2003. Within this set, highly book-cited articles tended to receive many ISI citations, indicating a significant relationship between the two types of citation data, but with important exceptions that point to the additional information provided by book citations. In summary, GBS is clearly a valuable new source of citation data for the social sciences and humanities. One practical implication is that book-oriented scholars should consult it for additional citations to their work when applying for promotion and tenure.
  17. Kousha, K.; Thelwall, M.: An automatic method for extracting citations from Google Books (2015) 0.01
    0.0077910894 = product of:
      0.031164357 = sum of:
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 1658) [ClassicSimilarity], result of:
              0.062328715 = score(doc=1658,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 1658, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1658)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Recent studies have shown that counting citations from books can help scholarly impact assessment and that Google Books (GB) is a useful source of such citation counts, despite its lack of a public citation index. Searching GB for citations produces approximate matches, however, and so its raw results need time-consuming human filtering. In response, this article introduces a method to automatically remove false and irrelevant matches from GB citation searches in addition to introducing refinements to a previous GB manual citation extraction method. The method was evaluated by manual checking of sampled GB results and comparing citations to about 14,500 monographs in the Thomson Reuters Book Citation Index (BKCI) against automatically extracted citations from GB across 24 subject areas. GB citations were 103% to 137% as numerous as BKCI citations in the humanities, except for tourism (72%) and linguistics (91%), 46% to 85% in social sciences, but only 8% to 53% in the sciences. In all cases, however, GB had substantially more citing books than did BKCI, with BKCI's results coming predominantly from journal articles. Moderate correlations between the GB and BKCI citation counts in social sciences and humanities, with most BKCI results coming from journal articles rather than books, suggest that they could measure different aspects of impact, however.
  18. Shema, H.; Bar-Ilan, J.; Thelwall, M.: How is research blogged? : A content analysis approach (2015) 0.01
    0.0077910894 = product of:
      0.031164357 = sum of:
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 1863) [ClassicSimilarity], result of:
              0.062328715 = score(doc=1863,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 1863, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1863)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Blogs that cite academic articles have emerged as a potential source of alternative impact metrics for the visibility of the blogged articles. Nevertheless, to evaluate more fully the value of blog citations, it is necessary to investigate whether research blogs focus on particular types of articles or give new perspectives on scientific discourse. Therefore, we studied the characteristics of peer-reviewed references in blogs and the typical content of blog posts to gain insight into bloggers' motivations. The sample consisted of 391 blog posts from 2010 to 2012 in Researchblogging.org's health category. The bloggers mostly cited recent research articles or reviews from top multidisciplinary and general medical journals. Using content analysis methods, we created a general classification scheme for blog post content with 10 major topic categories, each with several subcategories. The results suggest that health research bloggers rarely self-cite and that the vast majority of their blog posts (90%) include a general discussion of the issue covered in the article, with more than one quarter providing health-related advice based on the article(s) covered. These factors suggest a genuine attempt to engage with a wider, nonacademic audience. Nevertheless, almost 30% of the posts included some criticism of the issues being discussed.
  19. Kousha, K.; Thelwall, M.: News stories as evidence for research? : BBC citations from articles, books, and Wikipedia (2017) 0.01
    0.0077910894 = product of:
      0.031164357 = sum of:
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 3760) [ClassicSimilarity], result of:
              0.062328715 = score(doc=3760,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 3760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3760)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Although news stories target the general public and are sometimes inaccurate, they can serve as sources of real-world information for researchers. This article investigates the extent to which academics exploit journalism using content and citation analyses of online BBC News stories cited by Scopus articles. A total of 27,234 Scopus-indexed publications have cited at least one BBC News story, with a steady annual increase. Citing publications were proportionally more common in the arts and humanities (2.8% of publications in 2015) and social sciences (1.5%) than in medicine (0.1%) and science (<0.1%). Surprisingly, half of the sampled Scopus-cited science and technology (53%) and medicine and health (47%) stories were based on academic research, rather than otherwise unpublished information, suggesting that researchers have chosen a lower-quality secondary source for their citations. Nevertheless, the BBC News stories that were most frequently cited by Scopus, Google Books, and Wikipedia introduced new information from many different topics, including politics, business, economics, statistics, and reports about events. Thus, news stories are mediating real-world knowledge into the academic domain, a potential cause for concern.
  20. Thelwall, M.; Kousha, K.: SlideShare presentations, citations, users, and trends : a professional site with academic and educational uses (2017) 0.01
    0.0077910894 = product of:
      0.031164357 = sum of:
        0.031164357 = product of:
          0.062328715 = sum of:
            0.062328715 = weight(_text_:source in 3766) [ClassicSimilarity], result of:
              0.062328715 = score(doc=3766,freq=2.0), product of:
                0.22758624 = queryWeight, product of:
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.045906994 = queryNorm
                0.27386856 = fieldWeight in 3766, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.9575505 = idf(docFreq=844, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3766)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    SlideShare is a free social website that aims to help users distribute and find presentations. Owned by LinkedIn since 2012, it targets a professional audience but may give value to scholarship through creating a long-term record of the content of talks. This article tests this hypothesis by analyzing sets of general and scholarly related SlideShare documents using content and citation analysis and popularity statistics reported on the site. The results suggest that academics, students, and teachers are a minority of SlideShare uploaders, especially since 2010, with most documents not being directly related to scholarship or teaching. About two thirds of uploaded SlideShare documents are presentation slides, with the remainder often being files associated with presentations or video recordings of talks. SlideShare is therefore a presentation-centered site with a predominantly professional user base. Although a minority of the uploaded SlideShare documents are cited by, or cite, academic publications, probably too few articles are cited by SlideShare to consider extracting SlideShare citations for research evaluation. Nevertheless, scholars should consider SlideShare to be a potential source of academic and nonacademic information, particularly in library and information science, education, and business.