Search (3 results, page 1 of 1)

  • author_ss:"D'Angelo, C.A."
  • theme_ss:"Informetrie"
  1. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
          0.042483795 = score(doc=4190,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 4190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4190)
      0.2 = coord(1/5)
    
    Date
    22.1.2011 13:06:52
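The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: the score is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), scaled by the coordination factor. A minimal sketch reproducing the numbers for result 1 (term "22" in doc 4190), using only the constants shown in the explain output:

```python
import math

# Constants taken directly from the explain tree of result 1.
doc_freq, max_docs = 3622, 44218   # idf(docFreq=3622, maxDocs=44218)
freq = 2.0                         # termFreq=2.0
query_norm = 0.052260913           # queryNorm
field_norm = 0.046875              # fieldNorm(doc=4190)

# ClassicSimilarity components:
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.5018296
tf = math.sqrt(freq)                             # ~1.4142135
query_weight = idf * query_norm                  # ~0.18300882
field_weight = tf * idf * field_norm             # ~0.23214069
coord = 1 / 5                                    # coord(1/5): 1 of 5 clauses matched

score = query_weight * field_weight * coord      # ~0.008496759, as in the listing
```

The same formula, with the "it"-term constants (idf 2.892262, fieldNorm 0.046875 or 0.0390625), reproduces the scores of results 2 and 3.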
  2. Abramo, G.; D'Angelo, C.A.: The VQR, Italy's second national research assessment : methodological failures and ranking distortions (2015) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 2256) [ClassicSimilarity], result of:
          0.028980678 = score(doc=2256,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 2256, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=2256)
      0.2 = coord(1/5)
    
    Abstract
    The 2004-2010 VQR (Research Quality Evaluation), completed in July 2013, was Italy's second national research assessment exercise. The VQR performance evaluation followed a pattern also seen in other nations, as it was based on a selected subset of products. In this work, we identify the exercise's methodological weaknesses and measure the distortions that result from them in the university performance rankings. First, we create a scenario in which we assume the efficient selection of the products to be submitted by the universities and, from this, simulate a set of rankings applying the precise VQR rating criteria. Next, we compare these "VQR rankings" with those that would derive from the application of more-appropriate bibliometrics. Finally, we extend the comparison to university rankings based on the entire scientific production for the period, as indexed in the Web of Science.
  3. Abramo, G.; D'Angelo, C.A.: A decision support system for public research organizations participating in national research assessment exercises (2009) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 3123) [ClassicSimilarity], result of:
          0.024150565 = score(doc=3123,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 3123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3123)
      0.2 = coord(1/5)
    
    Abstract
    We are witnessing a rapid trend toward the adoption of exercises for evaluation of national research systems, generally based on peer review. They respond to two main needs: stimulating higher efficiency in research activities by public laboratories, and realizing better allocative efficiency in government funding of such institutions. However, the peer review approach is typified by several limitations that raise doubts about the achievement of the ultimate objectives. In particular, subjectivity of judgment, which occurs during the step of selecting research outputs to be submitted for the evaluations, risks heavily distorting both the final ratings of the organizations evaluated and the ultimate funding they receive. These distortions become ever more relevant if the evaluation is limited to small samples of the scientific production of the research institutions. The objective of the current study is to propose a quantitative methodology based on bibliometric data that would provide reliable support for the process of selecting the best products of a laboratory, and thus limit distortions. Benefits are twofold: single research institutions can maximize the probability of receiving a fair evaluation coherent with the real quality of their research. At the same time, broader adoption of this approach could also provide strong advantages at the macroeconomic level, since it guarantees financial allocations based on the real value of the institutions under evaluation. In this study the proposed methodology was applied to the hard science sectors of the Italian university research system for the period 2004-2006.