Search (230 results, page 1 of 12)

  • theme_ss:"Informetrie"
  1. Kousha, K.; Thelwall, M.: Google book search : citation analysis for social science and the humanities (2009) 0.11
    0.1050245 = product of:
      0.15753675 = sum of:
        0.13362148 = weight(_text_:book in 2946) [ClassicSimilarity], result of:
          0.13362148 = score(doc=2946,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5973039 = fieldWeight in 2946, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2946)
        0.023915261 = product of:
          0.047830522 = sum of:
            0.047830522 = weight(_text_:search in 2946) [ClassicSimilarity], result of:
              0.047830522 = score(doc=2946,freq=4.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.27153727 = fieldWeight in 2946, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
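    The block above is Lucene's ClassicSimilarity "explain" output for this record: each matching term contributes a queryWeight (idf x queryNorm) times a fieldWeight (tf x idf x fieldNorm), and the partial sums are scaled by the coord factors. A minimal sketch that recomputes the 0.1050245 shown for this record, assuming Lucene's default tf = sqrt(freq) and copying the remaining constants from the tree (the same structure applies to every scoring block in this list):

      from math import sqrt

      QUERY_NORM = 0.050679956  # queryNorm reported in the explain output

      def term_score(freq, idf, field_norm):
          query_weight = idf * QUERY_NORM               # idf * queryNorm
          field_weight = sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      book   = term_score(freq=12.0, idf=4.414126, field_norm=0.0390625)  # ~0.13362148
      search = term_score(freq=4.0,  idf=3.475677, field_norm=0.0390625)  # ~0.04783052
      score  = (book + 0.5 * search) * (2.0 / 3.0)  # coord(1/2), then coord(2/3)
      print(round(score, 7))                        # ~0.1050245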
    
    Abstract
    In both the social sciences and the humanities, books and monographs play significant roles in research communication. The absence of citations from most books and monographs from the Thomson Reuters/Institute for Scientific Information databases (ISI) has been criticized, but attempts to include citations from or to books in the research evaluation of the social sciences and humanities have not led to widespread adoption. This article assesses whether Google Book Search (GBS) can partially fill this gap by comparing citations from books with citations from journal articles to journal articles in 10 science, social science, and humanities disciplines. Book citations were 31% to 212% of ISI citations and, hence, numerous enough to supplement ISI citations in the social sciences and humanities covered, but not in the sciences (3%-5%), except for computing (46%), due to numerous published conference proceedings. A case study was also made of all 1,923 articles in the 51 information science and library science ISI-indexed journals published in 2003. Within this set, highly book-cited articles tended to receive many ISI citations, indicating a significant relationship between the two types of citation data, but with important exceptions that point to the additional information provided by book citations. In summary, GBS is clearly a valuable new source of citation data for the social sciences and humanities. One practical implication is that book-oriented scholars should consult it for additional citations to their work when applying for promotion and tenure.
  2. Phillips, R.L.: Book citations in PhD science dissertations : an examination of commercial book publishers' influence (2018) 0.07
    0.071264796 = product of:
      0.21379438 = sum of:
        0.21379438 = weight(_text_:book in 5517) [ClassicSimilarity], result of:
          0.21379438 = score(doc=5517,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.9556863 = fieldWeight in 5517, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0625 = fieldNorm(doc=5517)
      0.33333334 = coord(1/3)
    
    Abstract
    This case study examines the book citations of PhD dissertations from the City University of New York (CUNY). The study spans a ten-year period from 2008 to 2017 and includes 9,307 book citations sourced from 916 dissertations. Book citations were chosen from seven science subjects. Publishers were identified in order to examine trends and quantify the role of commercial publishers in book selections used to support dissertation research.
  3. Stuart, D.: Web metrics for library and information professionals (2014) 0.07
    0.07059233 = product of:
      0.105888486 = sum of:
        0.08538542 = weight(_text_:book in 2274) [ClassicSimilarity], result of:
          0.08538542 = score(doc=2274,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.38168296 = fieldWeight in 2274, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
        0.020503066 = product of:
          0.041006133 = sum of:
            0.041006133 = weight(_text_:search in 2274) [ClassicSimilarity], result of:
              0.041006133 = score(doc=2274,freq=6.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.23279473 = fieldWeight in 2274, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2274)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate a wider impact of a researcher's work than can be shown through traditional citation databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants not only to understand the impact of content but also to demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
  4. Bhavnani, S.K.; Wilson, C.S.: Information scattering (2009) 0.07
    0.0666973 = product of:
      0.10004594 = sum of:
        0.076371044 = weight(_text_:book in 3816) [ClassicSimilarity], result of:
          0.076371044 = score(doc=3816,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.34138763 = fieldWeight in 3816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3816)
        0.0236749 = product of:
          0.0473498 = sum of:
            0.0473498 = weight(_text_:search in 3816) [ClassicSimilarity], result of:
              0.0473498 = score(doc=3816,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.2688082 = fieldWeight in 3816, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3816)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Information scattering is an often observed phenomenon related to information collections where there are a few sources that have many items of relevant information about a topic, while most sources have only a few. This entry discusses the original discovery of the phenomenon, the types of information scattering observed across many different information collections, methods that have been used to analyze the phenomenon, explanations for why and how information scattering occurs, and how these results have informed the design of systems and search strategies. The entry concludes with future challenges related to building computational models to more precisely describe the process of information scatter, and algorithms which help users to gather highly scattered information.
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  5. Thelwall, M.: Webometrics (2009) 0.06
    0.05716911 = product of:
      0.085753664 = sum of:
        0.0654609 = weight(_text_:book in 3906) [ClassicSimilarity], result of:
          0.0654609 = score(doc=3906,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.29261798 = fieldWeight in 3906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=3906)
        0.02029277 = product of:
          0.04058554 = sum of:
            0.04058554 = weight(_text_:search in 3906) [ClassicSimilarity], result of:
              0.04058554 = score(doc=3906,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.230407 = fieldWeight in 3906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3906)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Webometrics is an information science field concerned with measuring aspects of the World Wide Web (WWW) for a variety of information science research goals. It came into existence about five years after the Web was formed and has since grown to become a significant aspect of information science, at least in terms of published research. Although some webometrics research has focused on the structure or evolution of the Web itself or the performance of commercial search engines, most has used data from the Web to shed light on information provision or online communication in various contexts. Most prominently, techniques have been developed to track, map, and assess Web-based informal scholarly communication, for example, in terms of the hyperlinks between academic Web sites or the online impact of digital repositories. In addition, a range of nonacademic issues and groups of Web users have also been analyzed.
    Footnote
    Cf.: http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
  6. Zuccala, A.; Someren, M. van; Bellen, M. van: ¬A machine-learning approach to coding book reviews as quality indicators : toward a theory of megacitation (2014) 0.05
    0.054550745 = product of:
      0.16365223 = sum of:
        0.16365223 = weight(_text_:book in 1530) [ClassicSimilarity], result of:
          0.16365223 = score(doc=1530,freq=18.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.73154485 = fieldWeight in 1530, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1530)
      0.33333334 = coord(1/3)
    
    Abstract
    A theory of "megacitation" is introduced and used in an experiment to demonstrate how a qualitative scholarly book review can be converted into a weighted bibliometric indicator. We employ a manual human-coding approach to classify book reviews in the field of history based on reviewers' assessments of a book author's scholarly credibility (SC) and writing style (WS). In total, 100 book reviews were selected from the American Historical Review and coded for their positive/negative valence on these two dimensions. Most were coded as positive (68% for SC and 47% for WS), and there was also a small positive correlation between SC and WS (r = 0.2). We then constructed a classifier, combining both manual design and machine learning, to categorize sentiment-based sentences in history book reviews. The machine classifier produced a matched accuracy (matched to the human coding) of approximately 75% for SC and 64% for WS. WS was found to be more difficult to classify by machine than SC because of the reviewers' use of more subtle language. With further training data, a machine-learning approach could be useful for automatically classifying a large number of history book reviews at once. Weighted megacitations can be especially valuable if they are used in conjunction with regular book/journal citations, and "libcitations" (i.e., library holding counts) for a comprehensive assessment of a book/monograph's scholarly impact.
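    The paper's classifier combines manual rule design with machine learning and is not specified further in the abstract; the scikit-learn pipeline below is only a generic, hypothetical sentence-classification baseline of the kind such a study might start from (all data shown are invented):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import accuracy_score

      # Hypothetical review sentences with human codes for scholarly credibility (SC)
      sentences = ["The author argues convincingly from the archival record.",
                   "The book's central claims are poorly substantiated."]
      human_codes = ["positive", "negative"]

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(sentences, human_codes)
      machine_codes = model.predict(sentences)
      # "Matched accuracy": agreement of the machine codes with the human coding
      print(accuracy_score(human_codes, machine_codes))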
  7. Shah, T.A.; Gul, S.; Gaur, R.C.: Authors self-citation behaviour in the field of Library and Information Science (2015) 0.05
    0.05210369 = product of:
      0.07815553 = sum of:
        0.06613927 = weight(_text_:book in 2597) [ClassicSimilarity], result of:
          0.06613927 = score(doc=2597,freq=6.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.2956504 = fieldWeight in 2597, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2597)
        0.01201626 = product of:
          0.02403252 = sum of:
            0.02403252 = weight(_text_:22 in 2597) [ClassicSimilarity], result of:
              0.02403252 = score(doc=2597,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.1354154 = fieldWeight in 2597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2597)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to analyse the author self-citation behavior in the field of Library and Information Science. Various factors governing the author self-citation behavior have also been studied. Design/methodology/approach The 2012 edition of the Social Science Citation Index was consulted for the selection of LIS journals. Under the subject heading "Information Science and Library Science" there were 84 journals, and 12 of these were selected for the study based on systematic sampling. The study was confined to original research and review articles that were published in the selected journals in the year 2009. The main reason to choose 2009 was to get at least five years (2009-2013) of citation data from the Web of Science Core Collection (excluding the Book Citation Index) and the SciELO Citation Index. A citation was treated as a self-citation whenever one of the authors of the citing and cited paper was common, i.e., the set of co-authors of the citing paper and that of the cited one are not disjoint. To minimize the risk of homonyms, spelling variants and misspellings in authors' names, the authors compared full author names in citing and cited articles. Findings A positive correlation exists between the number of authors and the total number of citations, with no correlation between the number of authors and the number/share of self-citations, i.e., self-citations are not affected by the number of co-authors in a paper. Articles produced in collaboration attract more self-citations than articles produced by only one author. There is no statistically significant variation in citation counts (total and self-citations) in works that are the result of different types of collaboration. A strong and statistically significant positive correlation exists between total citation count and frequency of self-citations. No relation could be ascertained between total citation count and the proportion of self-citations. Authors tend to cite more of their recent works than the work of other authors. Total citation count and number of self-citations are positively correlated with the impact factor of the source publication, and the correlation coefficient for total citations is much higher than that for self-citations. A negative correlation exists between the impact factor and the share of self-citations. Of particular note is that the correlation in all cases is weak. Research limitations/implications The research provides an understanding of author self-citations in the field of LIS. Readers are encouraged to further the study by taking into account a larger sample, tracing citations also from the Book Citation Index (WoS) and comparing results with other allied subjects so as to validate the robustness of the findings of this study. Originality/value Readers are encouraged to further the study by taking into account a larger sample, tracing citations also from the Book Citation Index (WoS) and comparing results with other allied subjects so as to validate the robustness of the findings of this study.
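    The self-citation test described above is a simple set-intersection check on (normalized) author names; a minimal sketch:

      def is_self_citation(citing_authors, cited_authors):
          # Self-citation as defined in the abstract: the co-author sets of the
          # citing and the cited paper are not disjoint. Names are assumed to be
          # full, normalized author names to limit homonym and spelling problems.
          return not set(citing_authors).isdisjoint(cited_authors)

      # Hypothetical example:
      is_self_citation({"Shah, T.A.", "Gul, S."}, {"Gul, S.", "Gaur, R.C."})  # True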
    Date
    20. 1.2015 18:30:22
  8. Zuccala, A.; Leeuwen, T.van: Book reviews in humanities research evaluations (2011) 0.05
    0.051430937 = product of:
      0.1542928 = sum of:
        0.1542928 = weight(_text_:book in 4771) [ClassicSimilarity], result of:
          0.1542928 = score(doc=4771,freq=16.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.68970716 = fieldWeight in 4771, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4771)
      0.33333334 = coord(1/3)
    
    Abstract
    Bibliometric evaluations of research outputs in the social sciences and humanities are challenging due to limitations associated with Web of Science data; however, background literature has shown that scholars are interested in stimulating improvements. We give special attention to book reviews processed by Web of Science history and literature journals, focusing on two types: Type I (i.e., reference to book only) and Type II (i.e., reference to book and other scholarly sources). Bibliometric data are collected and analyzed for a large set of reviews (1981-2009) to observe general publication patterns and patterns of citedness and co-citedness with books under review. Results show that reviews giving reference only to the book (Type I) are published more frequently while reviews referencing the book and other works (Type II) are more likely to be cited. The referencing culture of the humanities makes it difficult to understand patterns of co-citedness between books and review articles without further in-depth content analyses. Overall, citation counts to book reviews are typically low, but our data showed that they are scholarly and do play a role in the scholarly communication system. In the disciplines of history and literature, where book reviews are prominent, counting the number and type of reviews that a scholar produces throughout his/her career is a positive step forward in research evaluations. We propose a new set of journal quality indicators for the purpose of monitoring their scholarly influence.
  9. Torres-Salinas, D.; Gorraiz, J.; Robinson-Garcia, N.: ¬The insoluble problems of books : what does Altmetric.com have to offer? (2018) 0.05
    0.050299995 = product of:
      0.07544999 = sum of:
        0.061717123 = weight(_text_:book in 4633) [ClassicSimilarity], result of:
          0.061717123 = score(doc=4633,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.27588287 = fieldWeight in 4633, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
        0.013732869 = product of:
          0.027465738 = sum of:
            0.027465738 = weight(_text_:22 in 4633) [ClassicSimilarity], result of:
              0.027465738 = score(doc=4633,freq=2.0), product of:
                0.17747258 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050679956 = queryNorm
                0.15476047 = fieldWeight in 4633, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4633)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose The purpose of this paper is to analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books in comparison to PlumX. Design/methodology/approach The authors perform an exploratory analysis of the metrics the Altmetric Explorer for Institutions platform offers for books. The authors use two distinct data sets of books. On the one hand, the authors analyze the Book Collection included in Altmetric.com. On the other hand, the authors use Clarivate's Master Book List to analyze Altmetric.com's capabilities to download and merge data with external databases. Finally, the authors compare the findings with those obtained in a previous study performed in PlumX. Findings Altmetric.com combines and systematically tracks a set of data sources linked by DOI identifiers to retrieve metadata from books, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries, such as Harvard Library. The authors find issues with linkages between records and mentions, as well as ISBN discrepancies. Furthermore, the authors find that automatic bots greatly affect Wikipedia mentions of books. The comparison with PlumX suggests that neither of these tools provides a complete picture of the social attention generated by books and that they are complementary rather than comparable tools. Practical implications This study targets different audiences that can benefit from the findings. First, bibliometricians and researchers who seek alternative sources to develop bibliometric analyses of books, with a special focus on the Social Sciences and Humanities fields. Second, librarians and research managers who are the main clients to which these tools are directed. Third, Altmetric.com itself as well as other altmetric providers, who might get a better understanding of the limitations users encounter and improve this promising tool. Originality/value This is the first study to analyze Altmetric.com's functionalities and capabilities for providing metric data for books and to compare results from this platform with those obtained via PlumX.
    Date
    20. 1.2015 18:30:22
  10. Delgado-Quirós, L.; Aguillo, I.F.; Martín-Martín, A.; López-Cózar, E.D.; Orduña-Malea, E.; Ortega, J.L.: Why are these publications missing? : uncovering the reasons behind the exclusion of documents in free-access scholarly databases (2024) 0.05
    0.04764092 = product of:
      0.07146138 = sum of:
        0.05455074 = weight(_text_:book in 1201) [ClassicSimilarity], result of:
          0.05455074 = score(doc=1201,freq=2.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.2438483 = fieldWeight in 1201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1201)
        0.016910642 = product of:
          0.033821285 = sum of:
            0.033821285 = weight(_text_:search in 1201) [ClassicSimilarity], result of:
              0.033821285 = score(doc=1201,freq=2.0), product of:
                0.17614716 = queryWeight, product of:
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.050679956 = queryNorm
                0.19200584 = fieldWeight in 1201, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.475677 = idf(docFreq=3718, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1201)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study analyses the coverage of seven free-access bibliographic databases (Crossref, Dimensions non-subscription version, Google Scholar, Lens, Microsoft Academic, Scilit, and Semantic Scholar) to identify the potential reasons that might cause the exclusion of scholarly documents and how they could influence coverage. To do this, 116 k randomly selected bibliographic records from Crossref were used as a baseline. API endpoints and web scraping were used to query each database. The results show that coverage differences are mainly caused by the way each service builds its database. While classic bibliographic databases ingest almost exactly the same content from Crossref (Lens and Scilit miss 0.1% and 0.2% of the records, respectively), academic search engines present lower coverage (Google Scholar misses 9.8% of the records, Semantic Scholar 10%, and Microsoft Academic 12%). Coverage differences are mainly attributed to external factors, such as web accessibility and robot exclusion policies (39.2%-46%), and to internal requirements that exclude secondary content (6.5%-11.6%). In the case of Dimensions, the classic bibliographic database with the lowest coverage (7.6%), internal selection criteria such as the indexation of full books instead of book chapters (65%) and the exclusion of secondary content (15%) are the main reasons for missing publications.
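    The coverage comparison described above boils down to checking which baseline records each database returns; a simplified sketch under the assumption that records are matched by DOI (the study also handles records that cannot be matched this way):

      def missing_share(baseline_dois, database_dois):
          # Percentage of the Crossref baseline that a given database does not cover.
          baseline = set(baseline_dois)
          missing = baseline - set(database_dois)
          return 100.0 * len(missing) / len(baseline)

      # Hypothetical example:
      missing_share(["10.1/a", "10.1/b", "10.1/c"], ["10.1/a", "10.1/c"])  # ~33.3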
  11. Zuccala, A.; Guns, R.; Cornacchia, R.; Bod, R.: Can we rank scholarly book publishers? : a bibliometric experiment with the field of history (2015) 0.04
    0.044540495 = product of:
      0.13362148 = sum of:
        0.13362148 = weight(_text_:book in 2037) [ClassicSimilarity], result of:
          0.13362148 = score(doc=2037,freq=12.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5973039 = fieldWeight in 2037, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2037)
      0.33333334 = coord(1/3)
    
    Abstract
    This is a publisher ranking study based on a citation data grant from Elsevier, specifically, book titles cited in Scopus history journals (2007-2011) and matching metadata from WorldCat® (i.e., OCLC numbers, ISBN codes, publisher records, and library holding counts). Using both resources, we have created a unique relational database designed to compare citation counts to books with international library holdings or libcitations for scholarly book publishers. First, we construct a ranking of the top 500 publishers and explore descriptive statistics at the level of publisher type (university, commercial, other) and country of origin. We then identify the top 50 university presses and commercial houses based on total citations and mean citations per book (CPB). In a third analysis, we present a map of directed citation links between journals and book publishers. American and British presses/publishing houses tend to dominate the work of library collection managers and citing scholars; however, a number of specialist publishers from Europe are included. Distinct clusters from the directed citation map indicate a certain degree of regionalism and subject specialization, where some journals produced in languages other than English tend to cite books published by the same parent press. Bibliometric rankings convey only a small part of how the actual structure of the publishing field has evolved; hence, challenges lie ahead for developers of new citation indices for books and bibliometricians interested in measuring book and publisher impacts.
  12. Ossenblok, T.L.B.; Verleysen, F.T.; Engels, T.C.E.: Coauthorship of journal articles and book chapters in the social sciences and humanities (2000-2010) (2014) 0.04
    0.0436406 = product of:
      0.1309218 = sum of:
        0.1309218 = weight(_text_:book in 1249) [ClassicSimilarity], result of:
          0.1309218 = score(doc=1249,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.58523595 = fieldWeight in 1249, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=1249)
      0.33333334 = coord(1/3)
    
    Abstract
    This study analyzes coauthorship patterns in the social sciences and humanities (SSH) for the period 2000 to 2010. The basis for the analysis is the Flemish Academic Bibliographic Database for the Social Sciences and Humanities (VABB-SHW), a comprehensive bibliographic database of peer-reviewed publications in the SSH by researchers affiliated with Flemish universities. Combining data on journal articles and book chapters, our findings indicate that collaborative publishing in the SSH is increasing, though considerable differences between disciplines remain. Conversely, we did observe a sharp decline in single-author publishing. We further demonstrate that coauthored SSH articles in journals indexed in the Web of Science (WoS) generally have a higher (and growing) number of coauthors than do either those in non-WoS journals or book chapters. This illustrates the need to include non-WoS data and book chapters when studying coauthorship in the SSH.
  13. White, H.D.; Boell, S.K.; Yu, H.; Davis, M.; Wilson, C.S.; Cole, F.T.H.: Libcitations : a measure for comparative assessment of book publications in the humanities and social sciences (2009) 0.04
    0.040659726 = product of:
      0.12197917 = sum of:
        0.12197917 = weight(_text_:book in 2846) [ClassicSimilarity], result of:
          0.12197917 = score(doc=2846,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5452614 = fieldWeight in 2846, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2846)
      0.33333334 = coord(1/3)
    
    Abstract
    Bibliometric measures for evaluating research units in the book-oriented humanities and social sciences are underdeveloped relative to those available for journal-oriented science and technology. We therefore present a new measure designed for book-oriented fields: the libcitation count. This is a count of the libraries holding a given book, as reported in a national or international union catalog. As librarians decide what to acquire for the audiences they serve, they jointly constitute an instrument for gauging the cultural impact of books. Their decisions are informed by knowledge not only of audiences but also of the book world (e.g., the reputations of authors and the prestige of publishers). From libcitation counts, measures can be derived for comparing research units. Here, we imagine a match-up between the departments of history, philosophy, and political science at the University of New South Wales and the University of Sydney in Australia. We chose the 12 books from each department that had the highest libcitation counts in the Libraries Australia union catalog during 2000 to 2006. We present each book's raw libcitation count, its rank within its Library of Congress (LC) class, and its LC-class normalized libcitation score. The latter is patterned on the item-oriented field normalized citation score used in evaluative bibliometrics. Summary statistics based on these measures allow the departments to be compared for cultural impact. Our work has implications for programs such as Excellence in Research for Australia and the Research Assessment Exercise in the United Kingdom. It also has implications for data mining in OCLC's WorldCat.
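    The abstract does not spell the formula out, but by analogy with the item-oriented field-normalized citation score, the LC-class normalized libcitation score can be read as the book's libcitation count divided by the mean count of the books in its Library of Congress class; a sketch under that assumption:

      from statistics import mean

      def lc_normalized_libcitation(book_libcitations, class_libcitations):
          # book_libcitations: holdings count of the book in the union catalog
          # class_libcitations: holdings counts of all books in the same LC class
          return book_libcitations / mean(class_libcitations)

      # Hypothetical example: a book held by 120 libraries in a class averaging 60
      lc_normalized_libcitation(120, [60, 30, 90])  # 2.0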
  14. Kousha, K.; Thelwall, M.; Abdoli, M.: Goodreads reviews to assess the wider impacts of books (2017) 0.04
    0.040659726 = product of:
      0.12197917 = sum of:
        0.12197917 = weight(_text_:book in 3768) [ClassicSimilarity], result of:
          0.12197917 = score(doc=3768,freq=10.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5452614 = fieldWeight in 3768, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3768)
      0.33333334 = coord(1/3)
    
    Abstract
    Although peer-review and citation counts are commonly used to help assess the scholarly impact of published research, informal reader feedback might also be exploited to help assess the wider impacts of books, such as their educational or cultural value. The social website Goodreads seems to be a reasonable source for this purpose because it includes a large number of book reviews and ratings by many users inside and outside of academia. To check this, Goodreads book metrics were compared with different book-based impact indicators for 15,928 academic books across broad fields. Goodreads engagements were numerous enough in the arts (85% of books had at least one), humanities (80%), and social sciences (67%) for use as a source of impact evidence. Low and moderate correlations between Goodreads book metrics and scholarly or non-scholarly indicators suggest that reader feedback in Goodreads reflects the many purposes of books rather than a single type of impact. Although Goodreads book metrics can be manipulated, they could be used guardedly by academics, authors, and publishers in evaluations.
  15. Chen, C.: Mapping scientific frontiers : the quest for knowledge visualization (2003) 0.04
    0.03848739 = product of:
      0.11546217 = sum of:
        0.11546217 = weight(_text_:book in 2213) [ClassicSimilarity], result of:
          0.11546217 = score(doc=2213,freq=14.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.5161296 = fieldWeight in 2213, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.03125 = fieldNorm(doc=2213)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: JASIST 55(2004) no.4, S.363-365 (J.W. Schneider): "Theories and methods for mapping scientific frontiers have existed for decades, especially within quantitative studies of science. This book investigates mapping scientific frontiers from the perspective of visual thinking and visual exploration (visual communication). The central theme is the construction of visual-spatial representations that may convey insights into the dynamic structure of scientific frontiers. The author's previous book, Information Visualisation and Virtual Environments (1999), also concerns some of the ideas behind, and possible benefits of, visual communication. This new book takes a special focus on knowledge visualization, particularly in relation to the science literature. The book is not a technical tutorial, as the focus is on principles of visual communication and ways that may reveal the dynamics of scientific frontiers. The new approach to science mapping presented is the culmination of different approaches from several disciplines, such as philosophy of science, information retrieval, scientometrics, domain analysis, and information visualization. The book therefore addresses an audience with different disciplinary backgrounds and tries to stimulate interdisciplinary research. Chapter 1, The Growth of Scientific Knowledge, introduces a range of examples that illustrate fundamental issues concerning visual communication in general and science mapping in particular. Chapter 2, Mapping the Universe, focuses on the basic principles of cartography for visual communication. Chapter 3, Mapping the Mind, turns the attention inward and explores the design of mind maps, maps that represent our thoughts, experience, and knowledge. Chapter 4, Enabling Techniques for Science Mapping, essentially outlines the author's basic approach to science mapping.
    The title of Chapter 5, On the Shoulders of Giants, implies that knowledge of the structure of scientific frontiers in the immediate past holds the key to a fruitful exploration of people's intellectual assets. Chapter 6, Tracing Competing Paradigms, explains how information visualization can draw upon the philosophical framework of paradigm shifts and thereby enable scientists to track the development of competing paradigms. The final chapter, Tracking Latent Domain Knowledge, turns citation analysis upside down by looking at techniques that may reveal latent domain knowledge. Mapping Scientific Frontiers: The Quest for Knowledge Visualization is an excellent book and is highly recommended. The book convincingly outlines general theories concerning cartography, visual communication, and science mapping, especially how metaphors can make a "big picture" simple and useful. The author likewise shows how the GSA framework is based not only on technical possibilities but also on the visualization principles presented in the beginning chapters. Also, the author does a fine job of explaining why the mapping of scientific frontiers needs a combined effort from a diverse range of underlying disciplines, such as philosophy of science, sociology of science, scientometrics, domain analyses, information visualization, knowledge discovery, and data mining.
  16. H-Index auch im Web of Science (2008) 0.04
    0.03716494 = product of:
      0.111494824 = sum of:
        0.111494824 = sum of:
          0.07029622 = weight(_text_:search in 590) [ClassicSimilarity], result of:
            0.07029622 = score(doc=590,freq=6.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.39907667 = fieldWeight in 590, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=590)
          0.041198608 = weight(_text_:22 in 590) [ClassicSimilarity], result of:
            0.041198608 = score(doc=590,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.23214069 = fieldWeight in 590, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=590)
      0.33333334 = coord(1/3)
    
    Content
    "Zur Kurzmitteilung "Latest enhancements in Scopus: ... h-Index incorporated in Scopus" in den letzten Online-Mitteilungen (Online-Mitteilungen 92, S.31) ist zu korrigieren, dass der h-Index sehr wohl bereits im Web of Science enthalten ist. Allerdings findet man/frau diese Information nicht in der "cited ref search", sondern neben der Trefferliste einer Quick Search, General Search oder einer Suche über den Author Finder in der rechten Navigationsleiste unter dem Titel "Citation Report". Der "Citation Report" bietet für die in der jeweiligen Trefferliste angezeigten Arbeiten: - Die Gesamtzahl der Zitierungen aller Arbeiten in der Trefferliste - Die mittlere Zitationshäufigkeit dieser Arbeiten - Die Anzahl der Zitierungen der einzelnen Arbeiten, aufgeschlüsselt nach Publikationsjahr der zitierenden Arbeiten - Die mittlere Zitationshäufigkeit dieser Arbeiten pro Jahr - Den h-Index (ein h-Index von x sagt aus, dass x Arbeiten der Trefferliste mehr als x-mal zitiert wurden; er ist gegenüber sehr hohen Zitierungen einzelner Arbeiten unempfindlicher als die mittlere Zitationshäufigkeit)."
    Date
    6. 4.2008 19:04:22
  17. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.04
    0.03716494 = product of:
      0.111494824 = sum of:
        0.111494824 = sum of:
          0.07029622 = weight(_text_:search in 2742) [ClassicSimilarity], result of:
            0.07029622 = score(doc=2742,freq=6.0), product of:
              0.17614716 = queryWeight, product of:
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.050679956 = queryNorm
              0.39907667 = fieldWeight in 2742, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.475677 = idf(docFreq=3718, maxDocs=44218)
                0.046875 = fieldNorm(doc=2742)
          0.041198608 = weight(_text_:22 in 2742) [ClassicSimilarity], result of:
            0.041198608 = score(doc=2742,freq=2.0), product of:
              0.17747258 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050679956 = queryNorm
              0.23214069 = fieldWeight in 2742, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2742)
      0.33333334 = coord(1/3)
    
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction with search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
  18. Kousha, K.; Thelwall, M.; Rezaie, S.: Assessing the citation impact of books : the role of Google Books, Google Scholar, and Scopus (2011) 0.04
    0.036367163 = product of:
      0.10910148 = sum of:
        0.10910148 = weight(_text_:book in 4920) [ClassicSimilarity], result of:
          0.10910148 = score(doc=4920,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.4876966 = fieldWeight in 4920, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4920)
      0.33333334 = coord(1/3)
    
    Abstract
    Citation indicators are increasingly used in some subject areas to support peer review in the evaluation of researchers and departments. Nevertheless, traditional journal-based citation indexes may be inadequate for the citation impact assessment of book-based disciplines. This article examines whether online citations from Google Books and Google Scholar can provide alternative sources of citation evidence. To investigate this, we compared the citation counts to 1,000 books submitted to the 2008 U.K. Research Assessment Exercise (RAE) from Google Books and Google Scholar with Scopus citations across seven book-based disciplines (archaeology; law; politics and international studies; philosophy; sociology; history; and communication, cultural, and media studies). Google Books and Google Scholar citations to books were 1.4 and 3.2 times more common than were Scopus citations, and their medians were more than twice and three times as high as were Scopus median citations, respectively. This large number of citations is evidence that in book-oriented disciplines in the social sciences, arts, and humanities, online book citations may be sufficiently numerous to support peer review for research evaluation, at least in the United Kingdom.
  19. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.04
    0.036367163 = product of:
      0.10910148 = sum of:
        0.10910148 = weight(_text_:book in 3384) [ClassicSimilarity], result of:
          0.10910148 = score(doc=3384,freq=8.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.4876966 = fieldWeight in 3384, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
  20. Meho, L.I.; Sonnenwald, D.H.: Citation ranking versus peer evaluation of senior faculty research performance : a case study of Kurdish scholarship (2000) 0.03
    0.030858565 = product of:
      0.09257569 = sum of:
        0.09257569 = weight(_text_:book in 4382) [ClassicSimilarity], result of:
          0.09257569 = score(doc=4382,freq=4.0), product of:
            0.2237077 = queryWeight, product of:
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.050679956 = queryNorm
            0.41382432 = fieldWeight in 4382, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.414126 = idf(docFreq=1454, maxDocs=44218)
              0.046875 = fieldNorm(doc=4382)
      0.33333334 = coord(1/3)
    
    Abstract
    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of ranking. This study uses two additional sources of peer evaluation data: citation content analysis and book review content analysis. Two main questions are investigated: (a) To what degree does citation ranking correlate with data from citation content analysis, book reviews and peer ranking? (b) Is citation ranking a valid evaluative indicator of research performance of senior faculty members? This study shows that citation ranking can provide a valid indicator for comparative evaluation of senior faculty research performance.

Languages

  • e 217
  • d 10
  • sp 2
  • ro 1

Types

  • a 220
  • m 8
  • el 4
  • s 3