Search (159 results, page 1 of 8)

  • × language_ss:"e"
  • × theme_ss:"Informetrie"
  • × type_ss:"a"
  1. Raan, A.F.J. van: Statistical properties of bibliometric indicators : research group indicator distributions and correlations (2006) 0.08
    0.08044747 = product of:
      0.16089495 = sum of:
        0.16089495 = sum of:
          0.1025501 = weight(_text_:assessment in 5275) [ClassicSimilarity], result of:
            0.1025501 = score(doc=5275,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.36599535 = fieldWeight in 5275, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=5275)
          0.05834485 = weight(_text_:22 in 5275) [ClassicSimilarity], result of:
            0.05834485 = score(doc=5275,freq=4.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.32829654 = fieldWeight in 5275, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5275)
      0.5 = coord(1/2)
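
The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a worked check, here is a minimal Python sketch that reproduces the "assessment" leg of this entry's score from the constants shown; the idf formula is ClassicSimilarity's 1 + ln(maxDocs / (docFreq + 1)).

```python
import math

# Worked check of the "assessment" leg of entry 1's score, using the
# constants from the explain tree above (Lucene ClassicSimilarity).

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.050750602                       # queryNorm
idf_assessment = idf(480, 44218)               # -> 5.52102
query_weight = idf_assessment * query_norm     # -> 0.2801951
tf = math.sqrt(2.0)                            # tf = sqrt(termFreq), freq = 2
field_weight = tf * idf_assessment * 0.046875  # tf * idf * fieldNorm -> 0.36599535
print(query_weight * field_weight)             # -> 0.1025501

# The entry total is coord(1/2) times the sum of the two term legs:
print(0.5 * (0.1025501 + 0.05834485))          # -> 0.08044747
```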
    
    Abstract
    In this article we present an empirical approach to the study of the statistical properties of bibliometric indicators on a very relevant but not simply available aggregation level: the research group. We focus on the distribution functions of a coherent set of indicators that are used frequently in the analysis of research performance. In this sense, the coherent set of indicators acts as a measuring instrument. Better insight into the statistical properties of a measuring instrument is necessary to enable assessment of the instrument itself. The most basic distribution in bibliometric analysis is the distribution of citations over publications, and this distribution is very skewed. Nevertheless, we clearly observe the working of the central limit theorem and find that at the level of research groups the distribution functions of the main indicators, particularly the journal-normalized and the field-normalized indicators, approach normal distributions. The results of our study underline the importance of the idea of group oeuvre, that is, the role of sets of related publications as a unit of analysis.
    Date
    22. 7.2006 16:20:22
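
As a hedged illustration of the central-limit-theorem effect described in the abstract above, the sketch below averages a skewed per-publication citation distribution over group-sized oeuvres and shows the skewness shrinking; the lognormal stand-in and all numbers are assumptions, not the authors' data.

```python
import random
import statistics

# Illustration only: a lognormal stands in for the skewed distribution of
# citations over publications; averaging over a group oeuvre de-skews the
# indicator, as the central limit theorem predicts.
random.seed(42)

def citations() -> float:
    return random.lognormvariate(1.0, 1.0)  # right-skewed draw

def skewness(xs: list[float]) -> float:
    m, s = statistics.fmean(xs), statistics.stdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

per_pub = [citations() for _ in range(5000)]
per_group = [statistics.fmean(citations() for _ in range(100))  # oeuvre of 100
             for _ in range(5000)]

print(f"skewness, single publications: {skewness(per_pub):.2f}")   # large
print(f"skewness, group means:         {skewness(per_group):.2f}")  # much smaller
```
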
  2. Didegah, F.; Thelwall, M.: Co-saved, co-tweeted, and co-cited networks (2018) 0.07
    0.071903065 = product of:
      0.14380613 = sum of:
        0.14380613 = sum of:
          0.1025501 = weight(_text_:assessment in 4291) [ClassicSimilarity], result of:
            0.1025501 = score(doc=4291,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.36599535 = fieldWeight in 4291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.046875 = fieldNorm(doc=4291)
          0.041256037 = weight(_text_:22 in 4291) [ClassicSimilarity], result of:
            0.041256037 = score(doc=4291,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.23214069 = fieldWeight in 4291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4291)
      0.5 = coord(1/2)
    
    Abstract
    Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, but possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, with minimum tweeted (10), saved to Mendeley (100), and cited (10) thresholds. The results show surprisingly minor overall overlaps between the three phenomena. The importance of journals for Twitter and the presence of many bots at different levels of activity suggest that this site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessments, but sensitivity is needed for underlying differences.
    Date
    28. 7.2018 10:00:22
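
A sketch of the kind of co-usage comparison described in the abstract above: article pairs tweeted or cited by the same user are built from (user, article) events, then the two pair sets are compared. The toy events and the Jaccard measure are illustrative assumptions, not the study's data or method.

```python
from collections import defaultdict
from itertools import combinations

# Build co-usage pairs (articles used by the same user) per source,
# then measure how much the pair sets overlap. Toy data, not the study's.

def co_pairs(events: list[tuple[str, str]]) -> set[frozenset[str]]:
    # events = (user, article); return unordered article pairs per user
    by_user = defaultdict(set)
    for user, article in events:
        by_user[user].add(article)
    return {frozenset(p) for arts in by_user.values()
            for p in combinations(sorted(arts), 2)}

co_tweeted = co_pairs([("u1", "a1"), ("u1", "a2"), ("u2", "a2"), ("u2", "a3")])
co_cited = co_pairs([("r1", "a1"), ("r1", "a2"), ("r2", "a3"), ("r2", "a4")])

jaccard = len(co_tweeted & co_cited) / len(co_tweeted | co_cited)
print(f"overlap of co-tweeted and co-cited pairs (Jaccard): {jaccard:.2f}")
```
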
  3. Thelwall, M.; Kousha, K.; Abdoli, M.; Stuart, E.; Makita, M.; Wilson, P.; Levitt, J.: Why are coauthored academic articles more cited : higher quality or larger audience? (2023) 0.06
    0.059919223 = product of:
      0.11983845 = sum of:
        0.11983845 = sum of:
          0.08545842 = weight(_text_:assessment in 995) [ClassicSimilarity], result of:
            0.08545842 = score(doc=995,freq=2.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.30499613 = fieldWeight in 995, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0390625 = fieldNorm(doc=995)
          0.03438003 = weight(_text_:22 in 995) [ClassicSimilarity], result of:
            0.03438003 = score(doc=995,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.19345059 = fieldWeight in 995, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=995)
      0.5 = coord(1/2)
    
    Abstract
    Collaboration is encouraged because it is believed to improve academic research, supported by indirect evidence in the form of more coauthored articles being more cited. Nevertheless, this might not reflect quality but increased self-citations or the "audience effect": citations from increased awareness through multiple author networks. We address this with the first science-wide investigation into whether author numbers associate with journal article quality, using expert peer quality judgments for 122,331 articles from the 2014-20 UK national assessment. Spearman correlations between author numbers and quality scores show moderately strong positive associations (0.2-0.4) in the health, life, and physical sciences, but weak or no positive associations in engineering and social sciences, with weak negative/positive or no associations in various arts and humanities, and a possible negative association for decision sciences. This gives the first systematic evidence that greater numbers of authors associate with higher quality journal articles in the majority of academia outside the arts and humanities, at least for the UK. Positive associations between team size and citation counts in areas with little association between team size and quality also show that audience effects or other nonquality factors account for the higher citation rates of coauthored articles in some fields.
    Date
    22. 6.2023 18:11:50
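
The correlations reported above are Spearman rank correlations; below is a minimal sketch with hypothetical numbers (not the study's 122,331-article data set), assuming SciPy is available.

```python
from scipy.stats import spearmanr  # assumption: SciPy is installed

# Hypothetical (author count, peer-review quality score) pairs; the study
# correlates these per field over UK national assessment judgments.
authors = [1, 2, 3, 4, 5, 6, 8, 12]
quality = [2, 2, 3, 3, 3, 4, 3, 4]

rho, p = spearmanr(authors, quality)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```
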
  4. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.05
    0.05471951 = product of:
      0.10943902 = sum of:
        0.10943902 = sum of:
          0.088811 = weight(_text_:assessment in 3809) [ClassicSimilarity], result of:
            0.088811 = score(doc=3809,freq=6.0), product of:
              0.2801951 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.050750602 = queryNorm
              0.31696132 = fieldWeight in 3809, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3809)
          0.020628018 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
            0.020628018 = score(doc=3809,freq=2.0), product of:
              0.17771997 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050750602 = queryNorm
              0.116070345 = fieldWeight in 3809, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3809)
      0.5 = coord(1/2)
    
    Abstract
    Furthermore, the rise of the web, and subsequently, the social web, has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and citation indices as the primary assessment mechanisms. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post-publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, leads to an increased information overload (Bawden and Robinson, 2008), demanding new filters. The concept of altmetrics, short for alternative (to citation) metrics, was created out of an attempt to provide a filter (Priem et al., 2010) and to steer against the oversimplification of the measurement of scientific success solely on the basis of the number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces - "polymorphous mentioning" (Cronin et al., 1998, p. 1320) - of scholars and their documents on the web to measure "impact" of science in a broader manner than citations was introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):
    Date
    20. 1.2015 18:30:22
  5. López Piñeiro, C.; Gimenez Toledo, E.: Knowledge classification : a problem for scientific assessment in Spain? (2011) 0.04
    0.0444055 = product of:
      0.088811 = sum of:
        0.088811 = product of:
          0.177622 = sum of:
            0.177622 = weight(_text_:assessment in 4735) [ClassicSimilarity], result of:
              0.177622 = score(doc=4735,freq=6.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.63392264 = fieldWeight in 4735, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4735)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Agreements and disagreements among some of the most important knowledge classifications involved in Spanish scientific activity assessment are presented here. Knowledge classifications used by two Spanish platforms for journal evaluation, RESH and In-RECS/In-RECJ; the one used by Web of Knowledge; and those used by the three main agencies working on scientific evaluation in Spain, ANECA, ANEP, and CNEAI, are compared and analysed in order to check the differences between them. Four disciplines were traced across these knowledge classifications, and none of them tallies with the others. This state of affairs favours failures in the assessment system, especially in those disciplines whose position in the classifications seems less clear. In this paper, the need for a rapprochement on the subject is expressed, and the opening of a debate is offered, with the aim of stimulating improvement of the whole system, especially in the Humanities and Social Sciences fields.
  6. Martin, B.R.: The use of multiple indicators in the assessment of basic research (1996) 0.04
    0.04272921 = product of:
      0.08545842 = sum of:
        0.08545842 = product of:
          0.17091684 = sum of:
            0.17091684 = weight(_text_:assessment in 6696) [ClassicSimilarity], result of:
              0.17091684 = score(doc=6696,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.60999227 = fieldWeight in 6696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6696)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Huang, M.-H.; Lin, C.-S.; Chen, D.-Z.: Counting methods, country rank changes, and counting inflation in the assessment of national research productivity and impact (2011) 0.04
    0.04272921 = product of:
      0.08545842 = sum of:
        0.08545842 = product of:
          0.17091684 = sum of:
            0.17091684 = weight(_text_:assessment in 4942) [ClassicSimilarity], result of:
              0.17091684 = score(doc=4942,freq=8.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.60999227 = fieldWeight in 4942, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4942)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The counting of papers and citations is fundamental to the assessment of research productivity and impact. In an age of increasing scientific collaboration across national borders, the counting of papers produced by collaboration between multiple countries, and citations of such papers, raises concerns in country-level research evaluation. In this study, we compared the number counts and country ranks resulting from five different counting methods. We also observed inflation depending on the method used. Using the 1989 to 2008 physics papers indexed in ISI's Web of Science as our sample, we analyzed the counting results in terms of paper count (research productivity) as well as citation count and citation-paper ratio (CP ratio) based evaluation (research impact). The results show that at the country-level assessment, the selection of counting method had only minor influence on the number counts and country rankings in each assessment. However, the influences of counting methods varied between paper count, citation count, and CP ratio based evaluation. The findings also suggest that the popular counting method (whole counting) that gives each collaborating country one full credit may not be the best counting method. Straight counting that accredits only the first or the corresponding author or fractional counting that accredits each collaborator with partial and weighted credit might be the better choices.
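
A minimal sketch of three of the counting methods named in the abstract above, applied to a hypothetical multi-country paper; the author-order convention (first author's country for straight counting) is a simplification.

```python
from collections import Counter

# Country-level credit under three counting methods, for one paper whose
# author affiliations, in author order, are given as country codes.

def whole(countries: list[str]) -> Counter:
    return Counter({c: 1.0 for c in set(countries)})  # full credit to each country

def straight(countries: list[str]) -> Counter:
    return Counter({countries[0]: 1.0})  # first (or corresponding) author only

def fractional(countries: list[str]) -> Counter:
    credit = Counter()
    for c in countries:
        credit[c] += 1.0 / len(countries)  # weighted partial credit
    return credit

paper = ["TW", "TW", "US"]  # hypothetical 3-author, 2-country paper
for method in (whole, straight, fractional):
    print(method.__name__, dict(method(paper)))
# whole: TW 1.0, US 1.0 | straight: TW 1.0 | fractional: TW ~0.67, US ~0.33
```
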
  8. Trevorrow, P.: The use of H-index for the assessment of journals' performance will lead to shifts in editorial policies : a response (2012) 0.04
    0.04272921 = product of:
      0.08545842 = sum of:
        0.08545842 = product of:
          0.17091684 = sum of:
            0.17091684 = weight(_text_:assessment in 49) [ClassicSimilarity], result of:
              0.17091684 = score(doc=49,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.60999227 = fieldWeight in 49, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.078125 = fieldNorm(doc=49)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Joint, N.: Bemused by bibliometrics : using citation analysis to evaluate research quality (2008) 0.04
    0.037004586 = product of:
      0.07400917 = sum of:
        0.07400917 = product of:
          0.14801835 = sum of:
            0.14801835 = weight(_text_:assessment in 1900) [ClassicSimilarity], result of:
              0.14801835 = score(doc=1900,freq=6.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.5282689 = fieldWeight in 1900, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1900)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to examine the way in which library and information science (LIS) issues have been handled in the formulation of recent UK Higher Education policy concerned with research quality evaluation. Design/methodology/approach - A chronological review of decision making about digital rights arrangements for the 2008 Research Assessment Exercise (RAE), and of recent announcements about the new shape of metrics-based assessment in the Research Excellence Framework, which supersedes the RAE. Against this chronological framework, the likely nature of LIS practitioner reactions to the flow of decision making is suggested. Findings - It was found that a weak grasp of LIS issues by decision makers undermines the process whereby effective research evaluation models are created. LIS professional opinion should be sampled before key decisions are made. Research limitations/implications - This paper makes no sophisticated comments on the complex research issues underlying advanced bibliometric research evaluation models. It does point out that sophisticated and expensive bibliometric consultancies arrive at many conclusions about metrics-based research assessment that are common knowledge amongst LIS practitioners. Practical implications - Practical difficulties arise when one announces a decision to move to a new and specific type of research evaluation indicator before one has worked out anything very specific about that indicator. Originality/value - In this paper, the importance of information management issues to the mainstream issues of government and public administration is underlined. The most valuable conclusion of this paper is that, because LIS issues are now at the heart of democratic decision making, LIS practitioners and professionals should be given some sort of role in advising on such matters.
  10. Alimohammadi, D.: Webliometrics : a new horizon in information research (2006) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 621) [ClassicSimilarity], result of:
              0.14502776 = score(doc=621,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 621, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=621)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - During the second half of the past century, the field of library and information science (LIS) has frequently used the research methods of the social sciences. In particular, quantitative assessment research methodologies, together with their associated concept of quantitative assessment metrics, have also been used in the information field, out of which more specific bibliometric, scientometric, informetric and webometric research instruments have been developed. This brief communication tries to use the metrics system to coin a new concept in information science metrical studies, namely, webliometrics. Design/methodology/approach - An overview of the webliography is presented, while webliometrics as a type of research method in LIS is defined. Webliometrics' functions are enumerated and webliometric research methods are sketched out. Findings - Webliometrics is worthy of further clarification and development, both in theory and practice. Research limitations/implications - Webliometrics potentially offers a powerful and rigorous new research tool for LIS researchers. Practical implications - The research outputs of webliometrics, although theoretically and statistically rigorous, are of immediate practical value. Originality/value - This paper aims to increase knowledge of an original, though as yet under-utilised, approach to research methods.
  11. Oppenheim, C.; Stuart, D.: Is there a correlation between investment in an academic library and a higher education institution's ratings in the Research Assessment Exercise? (2004) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 668) [ClassicSimilarity], result of:
              0.14502776 = score(doc=668,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 668, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Investigates whether a correlation exists between a UK university's academic excellence, as judged by its Research Assessment Exercise (RAE) ratings, and the amount spent on its library. Considers both macro and micro levels, looking at institutions as a whole and at the departmental level within the area of archaeology. As well as comparing all the higher education institutions, this group is broken down further, comparing the ratings and spending of the Russell Group and the 94 Group. There are correlations between the different groups of higher education institutions and RAE ratings. However, rather than high RAE ratings causing high library spending or high library spending causing high RAE ratings, it is likely that they are indirectly linked: good universities have both high RAE ratings and good libraries, while poor universities have low RAE ratings and less money spent on libraries. Also describes how libraries in universities with archaeology departments allocate budgets.
  12. Meho, L.I.; Sugimoto, C.R.: Assessing the scholarly impact of information studies : a tale of two citation databases - Scopus and Web of Science (2009) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 3298) [ClassicSimilarity], result of:
              0.14502776 = score(doc=3298,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 3298, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3298)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study uses citations, from 1996 to 2007, to the work of 80 randomly selected full-time information studies (IS) faculty members from North America to examine differences between Scopus and Web of Science in assessing the scholarly impact of the field, focusing on the most frequently citing journals, conference proceedings, research domains and institutions, as well as all citing countries. Results show that when assessment is limited to smaller citing entities (e.g., journals, conference proceedings, institutions), the two databases produce considerably different results, whereas when assessment is limited to larger citing entities (e.g., research domains, countries), the two databases produce very similar pictures of scholarly impact. In the former case, the use of Scopus (for journals and institutions) and both Scopus and Web of Science (for conference proceedings) is necessary to more accurately assess or visualize the scholarly impact of IS, whereas in the latter case, assessing or visualizing the scholarly impact of IS is independent of the database used.
  13. Costas, R.; Leeuwen, T.N. van; Bordons, M.: A bibliometric classificatory approach for the study and assessment of research performance at the individual level : the effects of age on productivity and impact (2010) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 3700) [ClassicSimilarity], result of:
              0.14502776 = score(doc=3700,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 3700, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3700)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The authors set forth a general methodology for conducting bibliometric analyses at the micro level. It combines several indicators grouped into three factors or dimensions, which characterize different aspects of scientific performance. Different profiles or classes of scientists are described according to their research performance in each dimension. A series of results based on the application of this methodology to the study of Spanish National Research Council scientists in three thematic areas is presented. Special emphasis is placed on the identification and description of top scientists from structural and bibliometric perspectives. The effects of age on the productivity and impact of the different classes of scientists are analyzed. The classificatory approach proposed herein may prove a useful tool in support of research assessment at the individual level and for exploring potential determinants of research success.
  14. Abramo, G.; D'Angelo, C.A.: The VQR, Italy's second national research assessment : methodological failures and ranking distortions (2015) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 2256) [ClassicSimilarity], result of:
              0.14502776 = score(doc=2256,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 2256, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The 2004-2010 VQR (Research Quality Evaluation), completed in July 2013, was Italy's second national research assessment exercise. The VQR performance evaluation followed a pattern also seen in other nations, as it was based on a selected subset of products. In this work, we identify the exercise's methodological weaknesses and measure the distortions that result from them in the university performance rankings. First, we create a scenario in which we assume the efficient selection of the products to be submitted by the universities and, from this, simulate a set of rankings applying the precise VQR rating criteria. Next, we compare these "VQR rankings" with those that would derive from the application of more-appropriate bibliometrics. Finally, we extend the comparison to university rankings based on the entire scientific production for the period, as indexed in the Web of Science.
  15. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.04
    0.03625694 = product of:
      0.07251388 = sum of:
        0.07251388 = product of:
          0.14502776 = sum of:
            0.14502776 = weight(_text_:assessment in 2906) [ClassicSimilarity], result of:
              0.14502776 = score(doc=2906,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.51759565 = fieldWeight in 2906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
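
A rough sketch of the kind of automated Scopus pull described above, assuming Elsevier's Scopus Search API (the endpoint, the X-ELS-APIKey header, the AU-ID() query syntax, and the citedby-count response field); this is not Manifold's actual code.

```python
import requests  # assumption: the requests library is available

# Hypothetical pull of one faculty member's publications and citation counts
# from the Scopus Search API, in the spirit of Manifold's harvesting step.
API_KEY = "YOUR-ELSEVIER-API-KEY"  # placeholder credential
AUTHOR_ID = "0000000000"           # placeholder Scopus author ID

resp = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
    params={"query": f"AU-ID({AUTHOR_ID})", "count": 25},
    timeout=30,
)
resp.raise_for_status()
entries = resp.json()["search-results"]["entry"]
citations = [int(e.get("citedby-count", 0)) for e in entries]
print(f"{len(citations)} publications, {sum(citations)} total citations")
```
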
  16. Oppenheim, C.: Do citations count? : Citation indexing and the Research Assessment Exercise (RAE) (1996) 0.03
    0.034183368 = product of:
      0.068366736 = sum of:
        0.068366736 = product of:
          0.13673347 = sum of:
            0.13673347 = weight(_text_:assessment in 6673) [ClassicSimilarity], result of:
              0.13673347 = score(doc=6673,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4879938 = fieldWeight in 6673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Pillai, C.V.R.; Girijakumari, S.: Widening horizons of informetrics (1996) 0.03
    0.034183368 = product of:
      0.068366736 = sum of:
        0.068366736 = product of:
          0.13673347 = sum of:
            0.13673347 = weight(_text_:assessment in 7172) [ClassicSimilarity], result of:
              0.13673347 = score(doc=7172,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.4879938 = fieldWeight in 7172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7172)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Traces the origin and development of informetrics in the field of library and information science. 'Informetrics' is seen as a generic term to denote studies in which quantitative methods are applied. Discusses various applications of informetrics including citation analysis; impact factor; obsolescence and ageing studies; bibliographic coupling; co-citation; and the measurement of information, such as retrieval performance assessment. Outlines recent developments in informetrics and calls for attention to be paid to the quality of future research in the field to ensure its reliability
  18. Vaughan, L.; Shaw, D.: Web citation data for impact assessment : a comparison of four science disciplines (2005) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 3880) [ClassicSimilarity], result of:
              0.12085646 = score(doc=3880,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 3880, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3880)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The number and type of Web citations to journal articles in four areas of science are examined: biology, genetics, medicine, and multidisciplinary sciences. For a sample of 5,972 articles published in 114 journals, the median Web citation counts per journal article range from 6.2 in medicine to 10.4 in genetics. About 30% of Web citations in each area indicate intellectual impact (citations from articles or class readings, in contrast to citations from bibliographic services or the author's or journal's home page). Journals receiving more Web citations also have higher percentages of citations indicating intellectual impact. There is significant correlation between the number of citations reported in the databases from the Institute for Scientific Information (ISI, now Thomson Scientific) and the number of citations retrieved using the Google search engine (Web citations). The correlation is much weaker for journals published outside the United Kingdom or United States and for multidisciplinary journals. Web citation numbers are higher than ISI citation counts, suggesting that Web searches might be conducted for an earlier or a more fine-grained assessment of an article's impact. The Web-evident impact of non-UK/USA publications might provide a balance to the geographic or cultural biases observed in ISI's data, although the stability of Web citation counts is debatable.
  19. White, H.D.; Boell, S.K.; Yu, H.; Davis, M.; Wilson, C.S.; Cole, F.T.H.: Libcitations : a measure for comparative assessment of book publications in the humanities and social sciences (2009) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 2846) [ClassicSimilarity], result of:
              0.12085646 = score(doc=2846,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 2846, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Bibliometric measures for evaluating research units in the book-oriented humanities and social sciences are underdeveloped relative to those available for journal-oriented science and technology. We therefore present a new measure designed for book-oriented fields: the libcitation count. This is a count of the libraries holding a given book, as reported in a national or international union catalog. As librarians decide what to acquire for the audiences they serve, they jointly constitute an instrument for gauging the cultural impact of books. Their decisions are informed by knowledge not only of audiences but also of the book world (e.g., the reputations of authors and the prestige of publishers). From libcitation counts, measures can be derived for comparing research units. Here, we imagine a match-up between the departments of history, philosophy, and political science at the University of New South Wales and the University of Sydney in Australia. We chose the 12 books from each department that had the highest libcitation counts in the Libraries Australia union catalog during 2000 to 2006. We present each book's raw libcitation count, its rank within its Library of Congress (LC) class, and its LC-class normalized libcitation score. The latter is patterned on the item-oriented field normalized citation score used in evaluative bibliometrics. Summary statistics based on these measures allow the departments to be compared for cultural impact. Our work has implications for programs such as Excellence in Research for Australia and the Research Assessment Exercise in the United Kingdom. It also has implications for data mining in OCLC's WorldCat.
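
A hedged sketch of the LC-class normalization described above, assuming (by analogy with the item-oriented field-normalized citation score) that a book's libcitation count is divided by the mean count within its Library of Congress class; the holdings numbers are invented.

```python
from collections import defaultdict
from statistics import fmean

# Normalize each book's libcitation count by the mean count of its LC class.
holdings = [  # (title, LC class, libraries holding the book) - hypothetical
    ("Book A", "DU", 120), ("Book B", "DU", 60),
    ("Book C", "JA", 45), ("Book D", "JA", 15),
]

counts_by_class = defaultdict(list)
for _, lc_class, count in holdings:
    counts_by_class[lc_class].append(count)
class_mean = {lc: fmean(cs) for lc, cs in counts_by_class.items()}

for title, lc_class, count in holdings:
    score = count / class_mean[lc_class]  # LC-class normalized libcitation score
    print(f"{title} (class {lc_class}): {score:.2f}")
```
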
  20. Kousha, K.; Thelwall, M.; Rezaie, S.: Assessing the citation impact of books : the role of Google Books, Google Scholar, and Scopus (2011) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 4920) [ClassicSimilarity], result of:
              0.12085646 = score(doc=4920,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 4920, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Citation indicators are increasingly used in some subject areas to support peer review in the evaluation of researchers and departments. Nevertheless, traditional journal-based citation indexes may be inadequate for the citation impact assessment of book-based disciplines. This article examines whether online citations from Google Books and Google Scholar can provide alternative sources of citation evidence. To investigate this, we compared the citation counts to 1,000 books submitted to the 2008 U.K. Research Assessment Exercise (RAE) from Google Books and Google Scholar with Scopus citations across seven book-based disciplines (archaeology; law; politics and international studies; philosophy; sociology; history; and communication, cultural, and media studies). Google Books and Google Scholar citations to books were 1.4 and 3.2 times more common than were Scopus citations, and their medians were more than twice and three times as high as were Scopus median citations, respectively. This large number of citations is evidence that in book-oriented disciplines in the social sciences, arts, and humanities, online book citations may be sufficiently numerous to support peer review for research evaluation, at least in the United Kingdom.
