Search (117 results, page 1 of 6)

  • author_ss:"Thelwall, M."
  1. Thelwall, M.: Bibliometrics to webometrics (2009) 0.02
    0.017707217 = product of:
      0.07968248 = sum of:
        0.05872617 = weight(_text_:applications in 4239) [ClassicSimilarity], result of:
          0.05872617 = score(doc=4239,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.34048924 = fieldWeight in 4239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4239)
        0.020956306 = weight(_text_:of in 4239) [ClassicSimilarity], result of:
          0.020956306 = score(doc=4239,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34207192 = fieldWeight in 4239, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4239)
      0.22222222 = coord(2/9)
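
    The breakdown above is Lucene's ClassicSimilarity explain output: for each matching term, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm (with tf = sqrt(term frequency)), the per-term score is queryWeight × fieldWeight, and the record score is the sum of the matching term scores multiplied by the coordination factor coord(2/9) because 2 of the 9 query clauses matched. A minimal Python sketch reproducing the numbers for this record (the variable names are illustrative only):
      import math

      def term_score(freq, idf, field_norm, query_norm):
          # ClassicSimilarity: tf = sqrt(freq); queryWeight = idf * queryNorm;
          # fieldWeight = tf * idf * fieldNorm; term score = queryWeight * fieldWeight
          tf = math.sqrt(freq)
          return (idf * query_norm) * (tf * idf * field_norm)

      query_norm = 0.03917671
      applications = term_score(2.0, 4.4025097, 0.0546875, query_norm)   # ~0.05872617
      of_term = term_score(16.0, 1.5637573, 0.0546875, query_norm)       # ~0.020956306
      coord = 2 / 9                                                      # 2 of 9 query clauses matched
      print((applications + of_term) * coord)                            # ~0.017707217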
    
    Abstract
    Bibliometrics has changed out of all recognition since 1958, becoming established as a field, being taught widely in library and information science schools, and being at the core of a number of science evaluation research groups around the world. This was all made possible by the work of Eugene Garfield and his Science Citation Index. This article reviews the distance that bibliometrics has travelled since 1958 by comparing early bibliometrics with current practice, and by giving an overview of a range of recent developments, such as patent analysis, national research evaluation exercises, visualization techniques, new applications, online citation indexes, and the creation of digital libraries. Webometrics, a modern, fast-growing offshoot of bibliometrics, is reviewed in detail. Finally, future prospects are discussed with regard to both bibliometrics and webometrics.
  2. Angus, E.; Thelwall, M.; Stuart, D.: General patterns of tag usage among university groups in Flickr (2008) 0.02
    0.015648767 = product of:
      0.07041945 = sum of:
        0.050336715 = weight(_text_:applications in 2554) [ClassicSimilarity], result of:
          0.050336715 = score(doc=2554,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 2554, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=2554)
        0.020082738 = weight(_text_:of in 2554) [ClassicSimilarity], result of:
          0.020082738 = score(doc=2554,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 2554, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2554)
      0.22222222 = coord(2/9)
    
    Abstract
    Purpose - The purpose of this research is to investigate general patterns of tag usage and determine the usefulness of the tags used within university image groups to the wider Flickr community. There has been a significant rise in the use of Web 2.0 social network web sites and online applications in recent years. One of the most popular is Flickr, an online image management application. Design/methodology/approach - This study uses webometric data collection, classification and informetric analysis. Findings - The results show that members of university image groups tend to tag in a manner that is of use to users of the system as a whole rather than merely for the tag creator. Originality/value - This paper gives a valuable insight into the tagging practices of image groups in Flickr.
  3. Thelwall, M.: Text characteristics of English language university Web sites (2005) 0.02
    0.01541975 = product of:
      0.069388874 = sum of:
        0.050336715 = weight(_text_:applications in 3463) [ClassicSimilarity], result of:
          0.050336715 = score(doc=3463,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 3463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=3463)
        0.019052157 = weight(_text_:of in 3463) [ClassicSimilarity], result of:
          0.019052157 = score(doc=3463,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 3463, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3463)
      0.22222222 = coord(2/9)
    
    Abstract
    The nature of the contents of academic Web sites is of direct relevance to the new field of scientific Web intelligence and to search engine and topic-specific crawler designers. We analyze word frequencies in national academic Webs using the Web sites of three English-speaking nations: Australia, New Zealand, and the United Kingdom. Strong regularities were found in page size and word frequency distributions, but with significant anomalies. At least 26% of pages contain no words. High frequency words include university names and acronyms, Internet terminology, and computing product names: not always words in common usage away from the Web. A minority of low frequency words are spelling mistakes, with other common types including nonwords, proper names, foreign language terms or computer science variable names. Based upon these findings, recommendations for data cleansing and filtering are made, particularly for clustering applications.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.6, S.609-619
  4. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.01
    0.014423447 = product of:
      0.06490551 = sum of:
        0.016735615 = weight(_text_:of in 3384) [ClassicSimilarity], result of:
          0.016735615 = score(doc=3384,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 3384, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
        0.048169892 = weight(_text_:software in 3384) [ClassicSimilarity], result of:
          0.048169892 = score(doc=3384,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.30993375 = fieldWeight in 3384, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
      0.22222222 = coord(2/9)
    
    Abstract
    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
  5. Thelwall, M.: Mendeley readership altmetrics for medical articles : an analysis of 45 fields (2016) 0.01
    0.014008479 = product of:
      0.063038155 = sum of:
        0.050336715 = weight(_text_:applications in 3055) [ClassicSimilarity], result of:
          0.050336715 = score(doc=3055,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 3055, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=3055)
        0.012701439 = weight(_text_:of in 3055) [ClassicSimilarity], result of:
          0.012701439 = score(doc=3055,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 3055, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3055)
      0.22222222 = coord(2/9)
    
    Abstract
    Medical research is highly funded and often expensive and so is particularly important to evaluate effectively. Nevertheless, citation counts may accrue too slowly for use in some formal and informal evaluations. It is therefore important to investigate whether alternative metrics could be used as substitutes. This article assesses whether one such altmetric, Mendeley readership counts, correlates strongly with citation counts across all medical fields, whether the relationship is stronger if student readers are excluded, and whether they are distributed similarly to citation counts. Based on a sample of 332,975 articles from 2009 in 45 medical fields in Scopus, citation counts correlated strongly (about 0.7; 78% of articles had at least one reader) with Mendeley readership counts (from the new version 1 applications programming interface [API]) in almost all fields, with one minor exception, and the correlations tended to decrease slightly when student readers were excluded. Readership followed either a lognormal or a hooked power law distribution, whereas citations always followed a hooked power law, showing that the two may have underlying differences.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.8, S.1962-1972
  6. Thelwall, M.: Quantitative comparisons of search engine results (2008) 0.01
    0.013561956 = product of:
      0.0610288 = sum of:
        0.041947264 = weight(_text_:applications in 2350) [ClassicSimilarity], result of:
          0.041947264 = score(doc=2350,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2432066 = fieldWeight in 2350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2350)
        0.019081537 = weight(_text_:of in 2350) [ClassicSimilarity], result of:
          0.019081537 = score(doc=2350,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31146988 = fieldWeight in 2350, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2350)
      0.22222222 = coord(2/9)
    
    Abstract
    Search engines are normally used to find information or Web sites, but Webometric investigations use them for quantitative data such as the number of pages matching a query and the international spread of those pages. For this type of application, the accuracy of the hit count estimates and range of URLs in the full results are important. Here, we compare the applications programming interfaces of Google, Yahoo!, and Live Search for 1,587 single word searches. The hit count estimates were broadly consistent, but with Yahoo! and Google reporting 5-6 times more hits than Live Search. Yahoo! tended to return slightly more matching URLs than Google, with Live Search returning significantly fewer. Yahoo!'s result URLs included a significantly wider range of domains and sites than the other two, and there was little consistency between the three engines in the number of different domains. In contrast, the three engines were reasonably consistent in the number of different top-level domains represented in the result URLs, although Yahoo! tended to return the most. In conclusion, quantitative results from the three search engines are mostly consistent but with unexpected types of inconsistency that users should be aware of. Google is recommended for hit count estimates but Yahoo! is recommended for all other Webometric purposes.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.11, S.1702-1710
  7. Thelwall, M.; Prabowo, R.: Identifying and characterizing public science-related fears from RSS feeds (2007) 0.01
    0.013545832 = product of:
      0.060956243 = sum of:
        0.020082738 = weight(_text_:of in 137) [ClassicSimilarity], result of:
          0.020082738 = score(doc=137,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 137, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=137)
        0.040873505 = weight(_text_:software in 137) [ClassicSimilarity], result of:
          0.040873505 = score(doc=137,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.2629875 = fieldWeight in 137, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=137)
      0.22222222 = coord(2/9)
    
    Abstract
    A feature of modern democracies is public mistrust of scientists and the politicization of science policy, e.g., concerning stem cell research and genetically modified food. While the extent of this mistrust is debatable, its political influence is tangible. Hence, science policy researchers and science policy makers need early warning of issues that resonate with a wide public so that they can make timely and informed decisions. In this article, a semi-automatic method for identifying significant public science-related concerns from a corpus of Internet-based RSS (Really Simple Syndication) feeds is described and shown to be an improvement on a previous similar system because of the introduction of feed-based aggregation. In addition, both the RSS corpus and the concept of public science-related fears are deconstructed, revealing hidden complexity. This article also provides evidence that genetically modified organisms and stem cell research were the two major policy-relevant science concern issues, although mobile phone radiation and software security also generated significant interest.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.3, S.379-390
  8. Thelwall, M.; Harries, G.: ¬The connection between the research of a university and counts of links to its Web pages : an investigation based upon a classification of the relationships of pages to the research of the host university (2003) 0.01
    0.0118198255 = product of:
      0.053189214 = sum of:
        0.024573447 = weight(_text_:of in 1676) [ClassicSimilarity], result of:
          0.024573447 = score(doc=1676,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.40111488 = fieldWeight in 1676, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1676)
        0.028615767 = weight(_text_:systems in 1676) [ClassicSimilarity], result of:
          0.028615767 = score(doc=1676,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 1676, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1676)
      0.22222222 = coord(2/9)
    
    Abstract
    Results from recent advances in link metrics have demonstrated that the hyperlink structure of national university systems can be strongly related to the research productivity of the individual institutions. This paper uses a page categorization to show that restricting the metrics to subsets more closely related to the research of the host university can produce even stronger associations. A partial overlap was also found between the effects of applying advanced document models and separating page types, but the best results were achieved through a combination of the two.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.594-602
  9. Thelwall, M.: Assessing web search engines : a webometric approach (2011) 0.01
    0.009184495 = product of:
      0.041330226 = sum of:
        0.016802425 = weight(_text_:of in 10) [ClassicSimilarity], result of:
          0.016802425 = score(doc=10,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2742677 = fieldWeight in 10, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=10)
        0.0245278 = weight(_text_:systems in 10) [ClassicSimilarity], result of:
          0.0245278 = score(doc=10,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 10, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=10)
      0.22222222 = coord(2/9)
    
    Abstract
    Information Retrieval (IR) research typically evaluates search systems in terms of the standard precision and recall measures, with the F-measure weighting the relative importance of the two (e.g. van Rijsbergen, 1979). All of these assess the extent to which the system returns good matches for a query. In contrast, webometric measures are designed specifically for web search engines, to monitor changes in results over time and various aspects of the internal logic by which search engines select the results to be returned. This chapter introduces a range of webometric measurements and illustrates them with case studies of Google, Bing and Yahoo! This is a very fertile area for simple and complex new investigations into search engine results.
  10. Thelwall, M.: Can Google's PageRank be used to find the most important academic Web pages? (2003) 0.01
    0.008907516 = product of:
      0.04008382 = sum of:
        0.015556021 = weight(_text_:of in 4457) [ClassicSimilarity], result of:
          0.015556021 = score(doc=4457,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25392252 = fieldWeight in 4457, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4457)
        0.0245278 = weight(_text_:systems in 4457) [ClassicSimilarity], result of:
          0.0245278 = score(doc=4457,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 4457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4457)
      0.22222222 = coord(2/9)
    
    Abstract
    Google's PageRank is an influential algorithm that uses a model of Web use that is dominated by its link structure in order to rank pages by their estimated value to the Web community. This paper reports on the outcome of applying the algorithm to the Web sites of three national university systems in order to test whether it is capable of identifying the most important Web pages. The results are also compared with simple inlink counts. It was discovered that the highest inlinked pages do not always have the highest PageRank, indicating that the two metrics are genuinely different, even for the top pages. More significantly, however, internal links dominated external links for the high ranks in either method and superficial reasons accounted for high scores in both cases. It is concluded that PageRank is not useful for identifying the top pages in a site and that it must be combined with powerful text matching techniques in order to get the quality of information retrieval results provided by Google.
    Source
    Journal of documentation. 59(2003) no.2, S.205-217
  11. Thelwall, M.; Buckley, K.; Paltoglou, G.: Sentiment in Twitter events (2011) 0.01
    0.008001433 = product of:
      0.036006447 = sum of:
        0.020082738 = weight(_text_:of in 4345) [ClassicSimilarity], result of:
          0.020082738 = score(doc=4345,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 4345, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4345)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 4345) [ClassicSimilarity], result of:
              0.031847417 = score(doc=4345,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 4345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4345)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
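
    Here the term "22" (a date fragment) sits inside a nested boolean clause, so its weight is first scaled by the inner coord(1/2) before entering the outer sum, which is then scaled by the outer coord(2/9) as in the other records. A short sketch of that outer combination, using the values from the tree above (variable names illustrative only):
      of_weight = 0.020082738               # weight(_text_:of in 4345)
      term_22 = 0.031847417 * 0.5           # weight(_text_:22 in 4345), scaled by inner coord(1/2)
      print((of_weight + term_22) * 2 / 9)  # outer coord(2/9) -> ~0.008001433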
    
    Abstract
    The microblogging site Twitter generates a constant stream of communication, some of which concerns events of general interest. An analysis of Twitter may, therefore, give insights into why particular events resonate with the population. This article reports a study of a month of English Twitter posts, assessing whether popular events are typically associated with increases in sentiment strength, as seems intuitively likely. Using the top 30 events, determined by a measure of relative increase in (general) term usage, the results give strong evidence that popular events are normally associated with increases in negative sentiment strength and some evidence that peaks of interest in events have stronger positive sentiment than the time before the peak. It seems that many positive events, such as the Oscars, are capable of generating increased negative sentiment in reaction to them. Nevertheless, the surprisingly small average change in sentiment associated with popular events (typically 1% and only 6% for Tiger Woods' confessions) is consistent with events affording posters opportunities to satisfy pre-existing personal goals more often than eliciting instinctive reactions.
    Date
    22. 1.2011 14:27:06
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.406-418
  12. Thelwall, M.; Maflahi, N.: Guideline references and academic citations as evidence of the clinical value of health research (2016) 0.01
    0.008001433 = product of:
      0.036006447 = sum of:
        0.020082738 = weight(_text_:of in 2856) [ClassicSimilarity], result of:
          0.020082738 = score(doc=2856,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 2856, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2856)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 2856) [ClassicSimilarity], result of:
              0.031847417 = score(doc=2856,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 2856, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2856)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This article introduces a new source of evidence of the value of medical-related research: citations from clinical guidelines. These give evidence that research findings have been used to inform the day-to-day practice of medical staff. To identify whether citations from guidelines can give different information from that of traditional citation counts, this article assesses the extent to which references in clinical guidelines tend to be highly cited in the academic literature and highly read in Mendeley. Using evidence from the United Kingdom, references associated with the UK's National Institute for Health and Clinical Excellence (NICE) guidelines tended to be substantially more cited than comparable articles, unless they had been published in the most recent 3 years. Citation counts also seemed to be stronger indicators than Mendeley readership altmetrics. Hence, although presence in guidelines may be particularly useful to highlight the contributions of recently published articles, for older articles citation counts may already be sufficient to recognize their contributions to health in society.
    Date
    19. 3.2016 12:22:00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.960-966
  13. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.01
    0.008001433 = product of:
      0.036006447 = sum of:
        0.020082738 = weight(_text_:of in 4291) [ClassicSimilarity], result of:
          0.020082738 = score(doc=4291,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.32781258 = fieldWeight in 4291, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4291)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
              0.031847417 = score(doc=4291,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 4291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4291)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum tweeted (10), saved to Mendeley (100), and cited (10) thresholds. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.8, S.959-973
  14. Levitt, J.M.; Thelwall, M.: Citation levels and collaboration within library and information science (2009) 0.01
    0.0074966806 = product of:
      0.033735063 = sum of:
        0.014968789 = weight(_text_:of in 2734) [ClassicSimilarity], result of:
          0.014968789 = score(doc=2734,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24433708 = fieldWeight in 2734, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2734)
        0.018766273 = product of:
          0.037532546 = sum of:
            0.037532546 = weight(_text_:22 in 2734) [ClassicSimilarity], result of:
              0.037532546 = score(doc=2734,freq=4.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.27358043 = fieldWeight in 2734, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2734)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Collaboration is a major research policy objective, but does it deliver higher quality research? This study uses citation analysis to examine the Web of Science (WoS) Information Science & Library Science subject category (IS&LS) to ascertain whether, in general, more highly cited articles are more highly collaborative than other articles. It consists of two investigations. The first investigation is a longitudinal comparison of the degree and proportion of collaboration in five strata of citation; it found that collaboration in the highest four citation strata (all in the most highly cited 22%) increased in unison over time, whereas collaboration in the lowest citation stratum (un-cited articles) remained low and stable. Given that over 40% of the articles were un-cited, it seems important to take into account the differences found between un-cited articles and relatively highly cited articles when investigating collaboration in IS&LS. The second investigation compares collaboration for 35 influential information scientists; it found that their more highly cited articles on average were not more highly collaborative than their less highly cited articles. In summary, although collaborative research is conducive to high citation in general, collaboration has apparently not tended to be essential to the success of current and former elite information scientists.
    Date
    22. 3.2009 12:43:51
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.434-442
  15. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.01
    0.007189177 = product of:
      0.032351296 = sum of:
        0.019081537 = weight(_text_:of in 586) [ClassicSimilarity], result of:
          0.019081537 = score(doc=586,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31146988 = fieldWeight in 586, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=586)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.026539518 = score(doc=586,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web citation types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.11, S.1631-1644
  16. Thelwall, M.; Sud, P.; Wilkinson, D.: Link and co-inlink network diagrams with URL citations or title mentions (2012) 0.01
    0.006667861 = product of:
      0.030005373 = sum of:
        0.016735615 = weight(_text_:of in 57) [ClassicSimilarity], result of:
          0.016735615 = score(doc=57,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 57, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=57)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.026539518 = score(doc=57,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Webometric network analyses have been used to map the connectivity of groups of websites to identify clusters, important sites or overall structure. Such analyses have mainly been based upon hyperlink counts, the number of hyperlinks between a pair of websites, although some have used title mentions or URL citations instead. The ability to automatically gather hyperlink counts from Yahoo! ceased in April 2011 and the ability to manually gather such counts was due to cease by early 2012, creating a need for alternatives. This article assesses URL citations and title mentions as possible replacements for hyperlinks in both binary and weighted direct link and co-inlink network diagrams. It also assesses three different types of data for the network connections: hit count estimates, counts of matching URLs, and filtered counts of matching URLs. Results from analyses of U.S. library and information science departments and U.K. universities give evidence that metrics based upon URLs or titles can be appropriate replacements for metrics based upon hyperlinks for both binary and weighted networks, although filtered counts of matching URLs are necessary to give the best results for co-title mention and co-URL citation network diagrams.
    Date
    6. 4.2012 18:16:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.4, S.805-816
  17. Li, X.; Thelwall, M.; Kousha, K.: ¬The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.01
    0.006667861 = product of:
      0.030005373 = sum of:
        0.016735615 = weight(_text_:of in 2593) [ClassicSimilarity], result of:
          0.016735615 = score(doc=2593,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 2593, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2593)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
              0.026539518 = score(doc=2593,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 2593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2593)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Purpose - The four major Subject Repositories (SRs), arXiv, Research Papers in Economics (RePEc), Social Science Research Network (SSRN) and PubMed Central (PMC), are all important within their disciplines but no previous study has systematically compared how often they are cited in academic publications. In response, the purpose of this paper is to report an analysis of citations to SRs from Scopus publications, 2000-2013. Design/methodology/approach - Scopus searches were used to count the number of documents citing the four SRs in each year. A random sample of 384 documents citing the four SRs was then visited to investigate the nature of the citations. Findings - Each SR was most cited within its own subject area but attracted substantial citations from other subject areas, suggesting that they are open to interdisciplinary uses. The proportion of documents citing each SR is continuing to increase rapidly, and the SRs all seem to attract substantial numbers of citations from more than one discipline. Research limitations/implications - Scopus does not cover all publications, and most citations to documents found in the four SRs presumably cite the published version, when one exists, rather than the repository version. Practical implications - SRs are continuing to grow and do not seem to be threatened by institutional repositories and so research managers should encourage their continued use within their core disciplines, including for research that aims at an audience in other disciplines. Originality/value - This is the first simultaneous analysis of Scopus citations to the four most popular SRs.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 67(2015) no.6, S.614-635
  18. Thelwall, M.: Are Mendeley reader counts high enough for research evaluations when articles are published? (2017) 0.01
    0.006667861 = product of:
      0.030005373 = sum of:
        0.016735615 = weight(_text_:of in 3806) [ClassicSimilarity], result of:
          0.016735615 = score(doc=3806,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 3806, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3806)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 3806) [ClassicSimilarity], result of:
              0.026539518 = score(doc=3806,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 3806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3806)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Purpose - Mendeley reader counts have been proposed as early indicators for the impact of academic publications. The purpose of this paper is to assess whether there are enough Mendeley readers for research evaluation purposes during the month when an article is first published. Design/methodology/approach - Average Mendeley reader counts were compared to the average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016. Findings - Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times more than the average Scopus citation count. Research limitations/implications - Other disciplines may use Mendeley more or less than the ten investigated here. The results are dependent on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases. Practical implications - Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents but are not high enough to differentiate between the impacts of typical individual articles. Originality/value - This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 69(2017) no.2, S.174-183
  19. Thelwall, M.; Buckley, K.; Paltoglou, G.; Cai, D.; Kappas, A.: Sentiment strength detection in short informal text (2010) 0.01
    0.0064770123 = product of:
      0.029146556 = sum of:
        0.015876798 = weight(_text_:of in 4200) [ClassicSimilarity], result of:
          0.015876798 = score(doc=4200,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25915858 = fieldWeight in 4200, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4200)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 4200) [ClassicSimilarity], result of:
              0.026539518 = score(doc=4200,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 4200, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4200)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    A huge number of informal messages are posted every day in social network sites, blogs, and discussion forums. Emotions seem to be frequently important in these texts for expressing friendship, showing social support or as part of online arguments. Algorithms to identify sentiment and sentiment strength are needed to help understand the role of emotion in this informal communication and also to identify inappropriate or anomalous affective utterances, potentially associated with threatening behavior to the self or others. Nevertheless, existing sentiment detection algorithms tend to be commercially oriented, designed to identify opinions about products rather than user behaviors. This article partly fills this gap with a new algorithm, SentiStrength, to extract sentiment strength from informal English text, using new methods to exploit the de facto grammars and spelling styles of cyberspace. Applied to MySpace comments and with a lookup table of term sentiment strengths optimized by machine learning, SentiStrength is able to predict positive emotion with 60.6% accuracy and negative emotion with 72.8% accuracy, both based upon strength scales of 1-5. The former, but not the latter, is better than baseline and a wide range of general machine learning approaches.
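
    The approach described above is lexicon-based: each term carries a sentiment strength, and the strongest terms in a text determine its positive and negative strength scores on 1-5 scales. A minimal sketch of that lookup idea, assuming a tiny invented lexicon (SentiStrength's real term list, booster words, negation handling and spelling-style rules are far more elaborate):
      # Toy lexicon-lookup scorer; the word lists and scores are invented for illustration.
      POSITIVE = {"love": 4, "great": 3, "thanks": 2}
      NEGATIVE = {"hate": 4, "awful": 4, "annoying": 3}

      def sentiment_strength(text):
          words = text.lower().split()
          pos = max([POSITIVE.get(w, 1) for w in words] + [1])  # 1 = no positive emotion
          neg = max([NEGATIVE.get(w, 1) for w in words] + [1])  # 1 = no negative emotion
          return pos, neg

      print(sentiment_strength("love the show but the ending was awful"))  # -> (4, 4)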
    Date
    22. 1.2011 14:29:23
    Footnote
    See also the erratum in: Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.419
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2544-2558
  20. Thelwall, M.; Sud, P.: Mendeley readership counts : an investigation of temporal and disciplinary differences (2016) 0.01
    0.006361144 = product of:
      0.028625147 = sum of:
        0.012701439 = weight(_text_:of in 3211) [ClassicSimilarity], result of:
          0.012701439 = score(doc=3211,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 3211, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3211)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
              0.031847417 = score(doc=3211,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 3211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in five broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilize after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilizing after about 5 years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators.
    Date
    16.11.2016 11:07:22
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3036-3050