Search (27 results, page 2 of 2)

  • × author_ss:"Kousha, K."
  • × type_ss:"a"
  1. Kousha, K.; Thelwall, M.: Can Amazon.com reviews help to assess the wider impacts of books? (2016) 0.00
    7.542828E-4 = product of:
      0.008297111 = sum of:
        0.0035673876 = weight(_text_:in in 2768) [ClassicSimilarity], result of:
          0.0035673876 = score(doc=2768,freq=4.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.12752387 = fieldWeight in 2768, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2768)
        0.0047297236 = product of:
          0.009459447 = sum of:
            0.009459447 = weight(_text_:science in 2768) [ClassicSimilarity], result of:
              0.009459447 = score(doc=2768,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.17461908 = fieldWeight in 2768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2768)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
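    The figures above are a standard Lucene ClassicSimilarity (TF-IDF) explanation: for each matching query term, queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm are multiplied, and the coord() factors scale the result by the fraction of query clauses that matched. A minimal Python sketch that reproduces this first explanation from its own constants (the other results below follow the same pattern):
      # Sketch of Lucene ClassicSimilarity scoring, using the constants shown above.
      import math

      def term_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                     # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
          return query_weight * field_weight

      query_norm = 0.02056547
      s_in      = term_score(4.0, 1.3602545, query_norm, 0.046875)        # ~0.0035673876
      s_science = term_score(2.0, 2.6341193, query_norm, 0.046875) * 0.5  # coord(1/2): ~0.0047297236
      score = (s_in + s_science) * 2 / 22                                 # coord(2/22): ~7.542828E-4
      print(score)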
    
    Abstract
    Although citation counts are often used to evaluate the research impact of academic publications, they are problematic for books that aim for educational or cultural impact. To fill this gap, this article assesses whether a number of simple metrics derived from Amazon.com reviews of academic books could provide evidence of their impact. Based on a set of 2,739 academic monographs from 2008 and a set of 1,305 best-selling books in 15 Amazon.com academic subject categories, the existence of significant but low or moderate correlations between citations and numbers of reviews, combined with other evidence, suggests that online book reviews tend to reflect the wider popularity of a book rather than its academic impact, although there are substantial disciplinary differences. Metrics based on online reviews are therefore recommended for the evaluation of books that aim at a wide audience inside or outside academia when it is important to capture the broader impacts of educational or cultural activities and when they cannot be manipulated in advance of the evaluation.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.566-581
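    The study above relates Amazon.com review counts to citation counts per book and reports significant but low or moderate correlations. A hypothetical sketch of that kind of check, assuming a Spearman rank correlation (the abstract does not name the statistic) and using invented counts rather than the paper's data:
      # Hypothetical example: rank-correlate review counts with citation counts.
      from scipy.stats import spearmanr

      review_counts   = [0, 3, 12, 1, 7, 25, 0, 4]    # invented values
      citation_counts = [2, 10, 35, 0, 14, 40, 1, 9]  # invented values

      rho, p_value = spearmanr(review_counts, citation_counts)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")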
  2. Kousha, K.; Thelwall, M.: Patent citation analysis with Google (2017) 0.00
    6.9783063E-4 = product of:
      0.0076761367 = sum of:
        0.0021021033 = weight(_text_:in in 3317) [ClassicSimilarity], result of:
          0.0021021033 = score(doc=3317,freq=2.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.07514416 = fieldWeight in 3317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3317)
        0.005574033 = product of:
          0.011148066 = sum of:
            0.011148066 = weight(_text_:science in 3317) [ClassicSimilarity], result of:
              0.011148066 = score(doc=3317,freq=4.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.20579056 = fieldWeight in 3317, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3317)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
    
    Abstract
    Citations from patents to scientific publications provide useful evidence about the commercial impact of academic research, but automatically searchable databases are needed to exploit this connection for large-scale patent citation evaluations. Google covers multiple different international patent office databases but does not index patent citations or allow automatic searches. In response, this article introduces a semiautomatic indirect method via Bing to extract and filter patent citations from Google to academic papers with an overall precision of 98%. The method was evaluated with 322,192 science and engineering Scopus articles from every second year for the period 1996-2012. Although manual Google Patent searches give more results, especially for articles with many patent citations, the difference is not large enough to be a major problem. Within Biomedical Engineering, Biotechnology, and Pharmacology & Pharmaceutics, 7% to 10% of Scopus articles had at least one patent citation but other fields had far fewer, so patent citation analysis is only relevant for a minority of publications. Low but positive correlations between Google Patent citations and Scopus citations across all fields suggest that traditional citation counts cannot substitute for patent citations when evaluating research.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.48-61
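    The pipeline above extracts candidate patent citations and then filters them, but the abstract does not give the exact queries or filtering rules. A purely hypothetical sketch of one possible filtering step, keeping a candidate hit only if its text mentions both the cited article's title and its first author's surname:
      # Hypothetical filter for candidate patent-citation hits (not the authors' actual rules).
      def cites_article(candidate_text: str, title: str, first_author: str) -> bool:
          text = candidate_text.lower()
          return title.lower() in text and first_author.lower() in text

      candidates = [
          "Background: see 'An example article title', Smith et al., 2010.",
          "An unrelated patent description with no reference list.",
      ]
      hits = [c for c in candidates if cites_article(c, "An example article title", "Smith")]
      print(len(hits))  # -> 1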
  3. Thelwall, M.; Kousha, K.: SlideShare presentations, citations, users, and trends : a professional site with academic and educational uses (2017) 0.00
    6.9783063E-4 = product of:
      0.0076761367 = sum of:
        0.0021021033 = weight(_text_:in in 3766) [ClassicSimilarity], result of:
          0.0021021033 = score(doc=3766,freq=2.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.07514416 = fieldWeight in 3766, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3766)
        0.005574033 = product of:
          0.011148066 = sum of:
            0.011148066 = weight(_text_:science in 3766) [ClassicSimilarity], result of:
              0.011148066 = score(doc=3766,freq=4.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.20579056 = fieldWeight in 3766, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3766)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
    
    Abstract
    SlideShare is a free social website that aims to help users distribute and find presentations. Owned by LinkedIn since 2012, it targets a professional audience but may give value to scholarship through creating a long-term record of the content of talks. This article tests this hypothesis by analyzing sets of general and scholarly related SlideShare documents using content and citation analysis and popularity statistics reported on the site. The results suggest that academics, students, and teachers are a minority of SlideShare uploaders, especially since 2010, with most documents not being directly related to scholarship or teaching. About two thirds of uploaded SlideShare documents are presentation slides, with the remainder often being files associated with presentations or video recordings of talks. SlideShare is therefore a presentation-centered site with a predominantly professional user base. Although a minority of the uploaded SlideShare documents are cited by, or cite, academic publications, probably too few articles are cited by SlideShare to consider extracting SlideShare citations for research evaluation. Nevertheless, scholars should consider SlideShare to be a potential source of academic and nonacademic information, particularly in library and information science, education, and business.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1989-2003
  4. Thelwall, M.; Kousha, K.: ResearchGate articles : age, discipline, audience size, and impact (2017) 0.00
    6.893079E-4 = product of:
      0.007582387 = sum of:
        0.00364095 = weight(_text_:in in 3349) [ClassicSimilarity], result of:
          0.00364095 = score(doc=3349,freq=6.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.1301535 = fieldWeight in 3349, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3349)
        0.0039414368 = product of:
          0.0078828735 = sum of:
            0.0078828735 = weight(_text_:science in 3349) [ClassicSimilarity], result of:
              0.0078828735 = score(doc=3349,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.1455159 = fieldWeight in 3349, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3349)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
    
    Abstract
    The large multidisciplinary academic social website ResearchGate aims to help academics to connect with each other and to publicize their work. Despite its popularity, little is known about the age and discipline of the articles uploaded and viewed in the site and whether publication statistics from the site could be useful impact indicators. In response, this article assesses samples of ResearchGate articles uploaded at specific dates, comparing their views in the site to their Mendeley readers and Scopus-indexed citations. This analysis shows that ResearchGate is dominated by recent articles, which attract about three times as many views as older articles. ResearchGate has uneven coverage of scholarship, with the arts and humanities, health professions, and decision sciences poorly represented and some fields receiving twice as many views per article as others. View counts for uploaded articles have low to moderate positive correlations with both Scopus citations and Mendeley readers, which is consistent with them tending to reflect a wider audience than Scopus-publishing scholars. Hence, for articles uploaded to the site, view counts may give a genuinely new audience indicator.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.468-479
  5. Kousha, K.; Thelwall, M.; Abdoli, M.: Goodreads reviews to assess the wider impacts of books (2017) 0.00
    6.893079E-4 = product of:
      0.007582387 = sum of:
        0.00364095 = weight(_text_:in in 3768) [ClassicSimilarity], result of:
          0.00364095 = score(doc=3768,freq=6.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.1301535 = fieldWeight in 3768, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3768)
        0.0039414368 = product of:
          0.0078828735 = sum of:
            0.0078828735 = weight(_text_:science in 3768) [ClassicSimilarity], result of:
              0.0078828735 = score(doc=3768,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.1455159 = fieldWeight in 3768, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3768)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
    
    Abstract
    Although peer review and citation counts are commonly used to help assess the scholarly impact of published research, informal reader feedback might also be exploited to help assess the wider impacts of books, such as their educational or cultural value. The social website Goodreads seems to be a reasonable source for this purpose because it includes a large number of book reviews and ratings by many users inside and outside of academia. To check this, Goodreads book metrics were compared with different book-based impact indicators for 15,928 academic books across broad fields. Goodreads engagements were numerous enough in the arts (85% of books had at least one), humanities (80%), and social sciences (67%) for use as a source of impact evidence. Low and moderate correlations between Goodreads book metrics and scholarly or non-scholarly indicators suggest that reader feedback in Goodreads reflects the many purposes of books rather than a single type of impact. Although Goodreads book metrics can be manipulated, they could be used guardedly by academics, authors, and publishers in evaluations.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.2004-2016
  6. Kousha, K.; Thelwall, M.; Rezaie, S.: Can the impact of scholarly images be assessed online? : an exploratory study using image identification technology (2010) 0.00
    6.285691E-4 = product of:
      0.00691426 = sum of:
        0.0029728229 = weight(_text_:in in 3966) [ClassicSimilarity], result of:
          0.0029728229 = score(doc=3966,freq=4.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.10626988 = fieldWeight in 3966, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3966)
        0.0039414368 = product of:
          0.0078828735 = sum of:
            0.0078828735 = weight(_text_:science in 3966) [ClassicSimilarity], result of:
              0.0078828735 = score(doc=3966,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.1455159 = fieldWeight in 3966, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3966)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
    
    Abstract
    The web contains a huge number of digital pictures. For scholars publishing such images it is important to know how well used their images are, but no method seems to have been developed for monitoring the value of academic images. In particular, can the impact of scientific or artistic images be assessed through identifying images copied or reused on the Internet? This article explores a case study of 260 NASA images to investigate whether the TinEye search engine could theoretically help to provide this information. The results show that the selected pictures had a median of 11 online copies each. However, a classification of 210 of these copies reveals that only 1.4% were explicitly used in academic publications, reflecting research impact, and the majority of the NASA pictures were used for informal scholarly (or educational) communication (37%). Additional analyses of world famous paintings and scientific images about pathology and molecular structures suggest that image contents are important for the type and extent of image use. Although it is reasonable to use statistics derived from TinEye for assessing image reuse value, the extent of its image indexing is not known.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.9, S.1734-1744
  7. Thelwall, M.; Kousha, K.: Goodreads : a social network site for book readers (2017) 0.00
    5.494128E-4 = product of:
      0.0060435403 = sum of:
        0.0021021033 = weight(_text_:in in 3534) [ClassicSimilarity], result of:
          0.0021021033 = score(doc=3534,freq=2.0), product of:
            0.027974274 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02056547 = queryNorm
            0.07514416 = fieldWeight in 3534, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3534)
        0.0039414368 = product of:
          0.0078828735 = sum of:
            0.0078828735 = weight(_text_:science in 3534) [ClassicSimilarity], result of:
              0.0078828735 = score(doc=3534,freq=2.0), product of:
                0.0541719 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.02056547 = queryNorm
                0.1455159 = fieldWeight in 3534, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3534)
          0.5 = coord(1/2)
      0.09090909 = coord(2/22)
    
    Abstract
    Goodreads is an Amazon-owned book-based social website for members to share books read, review books, rate books, and connect with other readers. Goodreads has tens of millions of book reviews, recommendations, and ratings that may help librarians and readers to select relevant books. This article describes a first investigation of the properties of Goodreads users, using a random sample of 50,000 members. The results suggest that about three quarters of members with a public profile are female, and that there is little difference between male and female users in patterns of behavior, except for females registering more books and rating them less positively. Goodreads librarians and super-users engage extensively with most features of the site. The absence of strong correlations between book-based and social usage statistics (e.g., numbers of friends, followers, books, reviews, and ratings) suggests that members choose their own individual balance of social and book activities and rarely ignore one at the expense of the other. Goodreads is therefore neither primarily a book-based website nor primarily a social network site but is a genuine hybrid, social navigation site.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.972-983