Search (28 results, page 1 of 2)

  • author_ss:"Thelwall, M."
  1. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.10
    0.09744447 = product of:
      0.19488893 = sum of:
        0.14405231 = weight(_text_:assess in 3211) [ClassicSimilarity], result of:
          0.14405231 = score(doc=3211,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.050836623 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
          0.050836623 = score(doc=3211,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.23214069 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
      0.5 = coord(2/4)
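    The explain tree above is Lucene's ClassicSimilarity arithmetic written out in full. As a minimal sketch (plain Python, not Lucene itself), the per-term and total scores for document 3211 can be reproduced from the constants shown in the tree; the function name `term_score` is my own for illustration, not a Lucene API:

    ```python
    import math

    def term_score(freq, idf, query_norm, field_norm):
        """One query term's score: queryWeight * fieldWeight (ClassicSimilarity)."""
        query_weight = idf * query_norm       # e.g. 0.36863554 for "assess"
        tf = math.sqrt(freq)                  # ClassicSimilarity tf = sqrt(termFreq)
        field_weight = tf * idf * field_norm  # e.g. 0.39077166 for "assess"
        return query_weight * field_weight

    # Constants copied from the explain tree for doc 3211.
    assess = term_score(freq=2.0, idf=5.8947687, query_norm=0.062536046, field_norm=0.046875)
    t22    = term_score(freq=2.0, idf=3.5018296, query_norm=0.062536046, field_norm=0.046875)

    coord = 2 / 4              # 2 of 4 query terms matched this document
    total = (assess + t22) * coord
    print(assess, t22, total)  # ~0.14405231, ~0.050836623, ~0.09744447
    ```

    Each matched term contributes queryWeight × fieldWeight, and the sum is scaled by the coord factor (here 2 of 4 query terms matched), giving the 0.09744447 shown for result 1.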
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22
  2. Thelwall, M.: Are Mendeley reader counts high enough for research evaluations when articles are published? (2017) 0.08
    0.08120373 = product of:
      0.16240746 = sum of:
        0.120043606 = weight(_text_:assess in 3806) [ClassicSimilarity], result of:
          0.120043606 = score(doc=3806,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 3806, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
        0.042363856 = weight(_text_:22 in 3806) [ClassicSimilarity], result of:
          0.042363856 = score(doc=3806,freq=2.0), product of:
            0.21899058 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.062536046 = queryNorm
            0.19345059 = fieldWeight in 3806, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published.
    Design/methodology/approach: Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016.
    Findings: Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count.
    Research limitations/implications: Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases.
    Practical implications: Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents but are not high enough to differentiate between the impacts of typical individual articles.
    Originality/value: This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
    Date
    20. 1.2015 18:30:22
  3. Kousha, K.; Thelwall, M.; Abdoli, M.: Goodreads reviews to assess the wider impacts of books (2017) 0.05
    0.051980402 = product of:
      0.20792161 = sum of:
        0.20792161 = weight(_text_:assess in 3768) [ClassicSimilarity], result of:
          0.20792161 = score(doc=3768,freq=6.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.5640303 = fieldWeight in 3768, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3768)
      0.25 = coord(1/4)
    
    Abstract
    Although peer-review and citation counts are commonly used to help assess the scholarly impact of published research, informal reader feedback might also be exploited to help assess the wider impacts of books, such as their educational or cultural value. The social website Goodreads seems to be a reasonable source for this purpose because it includes a large number of book reviews and ratings by many users inside and outside of academia. To check this, Goodreads book metrics were compared with different book-based impact indicators for 15,928 academic books across broad fields. Goodreads engagements were numerous enough in the arts (85% of books had at least one), humanities (80%), and social sciences (67%) for use as a source of impact evidence. Low and moderate correlations between Goodreads book metrics and scholarly or non-scholarly indicators suggest that reader feedback in Goodreads reflects the many purposes of books rather than a single type of impact. Although Goodreads book metrics can be manipulated, they could be used guardedly by academics, authors, and publishers in evaluations.
  4. Thelwall, M.; Wilkinson, D.: Finding similar academic Web sites with links, bibliometric couplings and colinks (2004) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 2571) [ClassicSimilarity], result of:
          0.14405231 = score(doc=2571,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 2571, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=2571)
      0.25 = coord(1/4)
    
    Abstract
    A common task in both Webmetrics and Web information retrieval is to identify a set of Web pages or sites that are similar in content. In this paper we assess the extent to which links, colinks and couplings can be used to identify similar Web sites. As an experiment, a random sample of 500 pairs of domains from the UK academic Web was taken and human assessments of site similarity, based upon content type, were compared against ratings for the three concepts. The results show that using a combination of all three gives the highest probability of identifying similar sites, but surprisingly this was only a marginal improvement over using links alone. Another unexpected result was that high values for either colink counts or couplings were associated with only a small increased likelihood of similarity. The principal advantage of using couplings and colinks was found to be greater coverage, in terms of a much larger number of pairs of sites being connected by these measures, rather than increased probability of similarity. In information retrieval terminology, this is improved recall rather than improved precision.
  5. Thelwall, M.: Extracting accurate and complete results from search engines : case study windows live (2008) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 1338) [ClassicSimilarity], result of:
          0.14405231 = score(doc=1338,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 1338, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=1338)
      0.25 = coord(1/4)
    
    Abstract
    Although designed for general Web searching, commercial search engines are also used in Webometrics and related research to produce estimated hit counts or lists of URLs matching a query. Unfortunately, however, they do not return all matching URLs for a search and their hit count estimates are unreliable. In this article, we assess whether it is possible to obtain complete lists of matching URLs from Windows Live, and whether any of its hit count estimates are robust. As part of this, we introduce two new methods to extract extra URLs from search engines: automated query splitting and automated domain and TLD searching. Both methods successfully identify additional matching URLs, but the findings suggest that there is no way to get complete lists of matching URLs or accurate hit counts from Windows Live, although some estimating suggestions are provided.
  6. Shifman, L.; Thelwall, M.: Assessing global diffusion with Web memetics : the spread and evolution of a popular joke (2009) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 3303) [ClassicSimilarity], result of:
          0.14405231 = score(doc=3303,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 3303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3303)
      0.25 = coord(1/4)
    
    Abstract
    Memes are small units of culture, analogous to genes, which flow from person to person by copying or imitation. More than any previous medium, the Internet has the technical capabilities for global meme diffusion. Yet, to spread globally, memes need to negotiate their way through cultural and linguistic borders. This article introduces a new broad method, Web memetics, comprising extensive Web searches and combined quantitative and qualitative analyses, to identify and assess: (a) the different versions of a meme, (b) its evolution online, and (c) its Web presence and translation into common Internet languages. This method is demonstrated through one extensively circulated joke about men, women, and computers. The results show that the joke has mutated into several different versions and is widely translated, and that translations incorporate small, local adaptations while retaining the English versions' fundamental components. In conclusion, Web memetics has demonstrated its ability to identify and track the evolution and spread of memes online, with interesting results, albeit for only one case study.
  7. Thelwall, M.: Webometrics (2009) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 3906) [ClassicSimilarity], result of:
          0.14405231 = score(doc=3906,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 3906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=3906)
      0.25 = coord(1/4)
    
    Abstract
    Webometrics is an information science field concerned with measuring aspects of the World Wide Web (WWW) for a variety of information science research goals. It came into existence about five years after the Web was formed and has since grown to become a significant aspect of information science, at least in terms of published research. Although some webometrics research has focused on the structure or evolution of the Web itself or the performance of commercial search engines, most has used data from the Web to shed light on information provision or online communication in various contexts. Most prominently, techniques have been developed to track, map, and assess Web-based informal scholarly communication, for example, in terms of the hyperlinks between academic Web sites or the online impact of digital repositories. In addition, a range of nonacademic issues and groups of Web users have also been analyzed.
  8. Thelwall, M.: Assessing web search engines : a webometric approach (2011) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 10) [ClassicSimilarity], result of:
          0.14405231 = score(doc=10,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 10, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=10)
      0.25 = coord(1/4)
    
    Abstract
    Information Retrieval (IR) research typically evaluates search systems in terms of the standard precision and recall measures, with the F-measure weighting their relative importance (e.g. van Rijsbergen, 1979). All of these assess the extent to which the system returns good matches for a query. In contrast, webometric measures are designed specifically for web search engines: they monitor changes in results over time and various aspects of the internal logic by which search engines select the results to be returned. This chapter introduces a range of webometric measurements and illustrates them with case studies of Google, Bing and Yahoo! This is a very fertile area for simple and complex new investigations into search engine results.
  9. Kousha, K.; Thelwall, M.: Can Amazon.com reviews help to assess the wider impacts of books? (2016) 0.04
    0.036013078 = product of:
      0.14405231 = sum of:
        0.14405231 = weight(_text_:assess in 2768) [ClassicSimilarity], result of:
          0.14405231 = score(doc=2768,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.39077166 = fieldWeight in 2768, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.046875 = fieldNorm(doc=2768)
      0.25 = coord(1/4)
    
  10. Thelwall, M.; Harries, G.: Do the Web Sites of Higher Rated Scholars Have Significantly More Online Impact? (2004) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 2123) [ClassicSimilarity], result of:
          0.120043606 = score(doc=2123,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 2123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2123)
      0.25 = coord(1/4)
    
    Abstract
    The quality and impact of academic Web sites is of interest to many audiences, including the scholars who use them and Web educators who need to identify best practice. Several large-scale European Union research projects have been funded to build new indicators for online scientific activity, reflecting recognition of the importance of the Web for scholarly communication. In this paper we address the key question of whether higher rated scholars produce higher impact Web sites, using the United Kingdom as a case study and measuring scholars' quality in terms of university-wide average research ratings. Methodological issues concerning the measurement of the online impact are discussed, leading to the adoption of counts of links to a university's constituent single domain Web sites from an aggregated counting metric. The findings suggest that universities with higher rated scholars produce significantly more Web content but with a similar average online impact. Higher rated scholars therefore attract more total links from their peers, but only by being more prolific, refuting earlier suggestions. It can be surmised that general Web publications are very different from scholarly journal articles and conference papers, for which scholarly quality does associate with citation impact. This has important implications for the construction of new Web indicators, for example that online impact should not be used to assess the quality of small groups of scholars, even within a single discipline.
  11. Thelwall, M.; Prabowo, R.; Fairclough, R.: Are raw RSS feeds suitable for broad issue scanning? : a science concern case study (2006) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 6116) [ClassicSimilarity], result of:
          0.120043606 = score(doc=6116,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 6116, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6116)
      0.25 = coord(1/4)
    
    Abstract
    Broad issue scanning is the task of identifying important public debates arising in a given broad issue; really simple syndication (RSS) feeds are a natural information source for investigating broad issues. RSS, as originally conceived, is a method for publishing timely and concise information on the Internet, for example, about the main stories in a news site or the latest postings in a blog. RSS feeds are potentially a nonintrusive source of high-quality data about public opinion: Monitoring a large number may allow quantitative methods to extract information relevant to a given need. In this article we describe an RSS feed-based coword frequency method to identify bursts of discussion relevant to a given broad issue. A case study of public science concerns is used to demonstrate the method and assess the suitability of raw RSS feeds for broad issue scanning (i.e., without data cleansing). An attempt to identify genuine science concern debates from the corpus through investigating the top 1,000 "burst" words found only two genuine debates, however. The low success rate was mainly caused by a few pathological feeds that dominated the results and obscured any significant debates. The results point to the need to develop effective data cleansing procedures for RSS feeds, particularly if there is not a large quantity of discussion about the broad issue, and a range of potential techniques is suggested. Finally, the analysis confirmed that the time series information generated by real-time monitoring of RSS feeds could usefully illustrate the evolution of new debates relevant to a broad issue.
  12. Thelwall, M.; Li, X.; Barjak, F.; Robinson, S.: Assessing the international web connectivity of research groups (2008) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 1401) [ClassicSimilarity], result of:
          0.120043606 = score(doc=1401,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 1401, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this paper is to claim that it is useful to assess the web connectivity of research groups, describe hyperlink-based techniques to achieve this and present brief details of European life sciences research groups as a case study.
    Design/methodology/approach: A commercial search engine was harnessed to deliver hyperlink data via its automatic query submission interface. A special purpose link analysis tool, LexiURL, then summarised and graphed the link data in appropriate ways.
    Findings: Webometrics can provide a wide range of descriptive information about the international connectivity of research groups.
    Research limitations/implications: Only one field was analysed, data was taken from only one search engine, and the results were not validated.
    Practical implications: Web connectivity seems to be particularly important for attracting overseas job applicants and to promote research achievements and capabilities, and hence we contend that it can be useful for national and international governments to use webometrics to ensure that the web is being used effectively by research groups.
    Originality/value: This is the first paper to make a case for the value of using a range of webometric techniques to evaluate the web presences of research groups within a field, and possibly the first "applied" webometrics study produced for an external contract.
  13. Kousha, K.; Thelwall, M.: Assessing the impact of disciplinary research on teaching : an automatic analysis of online syllabuses (2008) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 2383) [ClassicSimilarity], result of:
          0.120043606 = score(doc=2383,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 2383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2383)
      0.25 = coord(1/4)
    
    Abstract
    The impact of published academic research in the sciences and social sciences, when measured, is commonly estimated by counting citations from journal articles. The Web has now introduced new potential sources of quantitative data online that could be used to measure aspects of research impact. In this article we assess the extent to which citations from online syllabuses could be a valuable source of evidence about the educational utility of research. An analysis of online syllabus citations to 70,700 articles published in 2003 in the journals of 12 subjects indicates that online syllabus citations were sufficiently numerous to be a useful impact indicator in some social sciences, including political science and information and library science, but not in others, nor in any sciences. This result was consistent with current social science research having, in general, more educational value than current science research. Moreover, articles frequently cited in online syllabuses were not necessarily highly cited by other articles. Hence it seems that online syllabus citations provide a valuable additional source of evidence about the impact of journals, scholars, and research articles in some social sciences.
  14. Thelwall, M.: A comparison of link and URL citation counting (2011) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 4533) [ClassicSimilarity], result of:
          0.120043606 = score(doc=4533,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 4533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4533)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: Link analysis is an established topic within webometrics. It normally uses counts of links between sets of web sites or to sets of web sites. These link counts are derived from web crawlers or commercial search engines, with the latter being the only alternative for some investigations. This paper compares link counts with URL citation counts in order to assess whether the latter could be a replacement for the former if the major search engines withdraw their advanced hyperlink search facilities.
    Design/methodology/approach: URL citation counts are compared with link counts for a variety of data sets used in previous webometric studies.
    Findings: The results show a high degree of correlation between the two, but with URL citations being much less numerous, at least outside academia and business.
    Research limitations/implications: The results cover a small selection of 15 case studies and so the findings are only indicative. Significant differences between results indicate that the difference between link counts and URL citation counts will vary between webometric studies.
    Practical implications: Should link searches be withdrawn, then link analyses of less well linked non-academic, non-commercial sites would be seriously weakened, although citations based on e-mail addresses could help to make citations more numerous than links for some business and academic contexts.
    Originality/value: This is the first systematic study of the difference between link counts and URL citation counts in a variety of contexts, and it shows that there are significant differences between the two.
  15. Kousha, K.; Thelwall, M.: Disseminating research with web CV hyperlinks (2014) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 1331) [ClassicSimilarity], result of:
          0.120043606 = score(doc=1331,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 1331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1331)
      0.25 = coord(1/4)
    
    Abstract
    Some curricula vitae (web CVs) of academics on the web, including homepages and publication lists, link to open-access (OA) articles, resources, abstracts in publishers' websites, or academic discussions, helping to disseminate research. To assess how common such practices are and whether they vary by discipline, gender, and country, the authors conducted a large-scale e-mail survey of astronomy and astrophysics, public health, environmental engineering, and philosophy across 15 European countries and analyzed hyperlinks from web CVs of academics. About 60% of the 2,154 survey responses reported having a web CV or something similar, and there were differences between disciplines, genders, and countries. A follow-up outlink analysis of 2,700 web CVs found that a third had at least one outlink to an OA target, typically a public eprint archive or an individual self-archived file. This proportion was considerably higher in astronomy (48%) and philosophy (37%) than in environmental engineering (29%) and public health (21%). There were also differences in linking to publishers' websites, resources, and discussions. Perhaps most important, however, the amount of linking to OA publications seems to be much lower than allowed by publishers and journals, suggesting that many opportunities for disseminating full-text research online are being missed, especially in disciplines without established repositories. Moreover, few academics seem to be exploiting their CVs to link to discussions, resources, or article abstracts, which seems to be another missed opportunity for publicizing research.
  16. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.03
    0.030010901 = product of:
      0.120043606 = sum of:
        0.120043606 = weight(_text_:assess in 3384) [ClassicSimilarity], result of:
          0.120043606 = score(doc=3384,freq=2.0), product of:
            0.36863554 = queryWeight, product of:
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.062536046 = queryNorm
            0.32564306 = fieldWeight in 3384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8947687 = idf(docFreq=330, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
      0.25 = coord(1/4)
    
    Abstract
    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
  17. Kousha, K.; Thelwall, M.: Are wikipedia citations important evidence of the impact of scholarly articles and books? (2017)
    Abstract
    Individual academics and research evaluators often need to assess the value of published research. Although citation counts are a recognized indicator of scholarly impact, alternative data is needed to provide evidence of other types of impact, including within education and wider society. Wikipedia is a logical choice for both of these because the role of a general encyclopaedia is to be an understandable repository of facts about a diverse array of topics and hence it may cite research to support its claims. To test whether Wikipedia could provide new evidence about the impact of scholarly research, this article counted citations to 302,328 articles and 18,735 monographs in English indexed by Scopus in the period 2005 to 2012. The results show that citations from Wikipedia to articles are too rare for most research evaluation purposes, with only 5% of articles being cited in all fields. In contrast, a third of monographs have at least one citation from Wikipedia, with the most in the arts and humanities. Hence, Wikipedia citations can provide extra impact evidence for academic monographs. Nevertheless, the results may be relatively easily manipulated and so Wikipedia is not recommended for evaluations affecting stakeholder interests.
  18. Thelwall, M.; Ruschenburg, T.: Grundlagen und Forschungsfelder der Webometrie [Foundations and research fields of webometrics] (2006)
    Date
    4.12.2006 12:12:22
  19. Levitt, J.M.; Thelwall, M.: Citation levels and collaboration within library and information science (2009)
    Abstract
Collaboration is a major research policy objective, but does it deliver higher quality research? This study uses citation analysis to examine the Web of Science (WoS) Information Science & Library Science subject category (IS&LS) to ascertain whether, in general, more highly cited articles are more highly collaborative than other articles. It consists of two investigations. The first investigation is a longitudinal comparison of the degree and proportion of collaboration in five strata of citation; it found that collaboration in the highest four citation strata (all in the most highly cited 22%) increased in unison over time, whereas collaboration in the lowest citation stratum (un-cited articles) remained low and stable. Given that over 40% of the articles were un-cited, it seems important to take into account the differences found between un-cited articles and relatively highly cited articles when investigating collaboration in IS&LS. The second investigation compares collaboration for 35 influential information scientists; it found that their more highly cited articles were, on average, not more highly collaborative than their less highly cited articles. In summary, although collaborative research is conducive to high citation in general, collaboration has apparently not tended to be essential to the success of current and former elite information scientists.
    Date
    22. 3.2009 12:43:51
  20. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment in Twitter events (2011)
    Date
    22. 1.2011 14:27:06