Search (3 results, page 1 of 1)

  • author_ss:"D'Angelo, C.A."
  • year_i:[2010 TO 2020}
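
The two facets above are Solr filter queries: author_ss filters on an exact author string, and year_i uses a range that includes the lower bound and excludes the upper ([2010 TO 2020}). The sketch below shows how such a filtered search could be re-run against a Solr select handler; the endpoint URL, core name, and printed field names are assumptions, and since the free-text part of the query is not shown in this listing, a match-all query stands in for it.

  # Hedged sketch: re-running the filtered search against a Solr core.
  # The fq strings are copied from the facets above; the endpoint, core name,
  # and result field names are assumptions, not taken from this page.
  import requests

  SOLR_SELECT = "http://localhost:8983/solr/literature/select"  # hypothetical endpoint

  params = {
      "q": "*:*",  # the original free-text query is not shown; *:* keeps only the filters
      "fq": [
          'author_ss:"D\'Angelo, C.A."',
          "year_i:[2010 TO 2020}",   # [ = inclusive lower bound, } = exclusive upper bound
      ],
      "rows": 10,
      "debugQuery": "true",          # asks Solr for explain trees like the ones shown below
  }

  response = requests.get(SOLR_SELECT, params=params)
  for doc in response.json()["response"]["docs"]:
      print(doc.get("author_ss"), doc.get("year_i"))
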
  1. D'Angelo, C.A.; Giuffrida, C.; Abramo, G.: A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments (2011) 0.01
    0.010587957 = product of:
      0.021175914 = sum of:
        0.021175914 = product of:
          0.042351827 = sum of:
            0.042351827 = weight(_text_:22 in 4190) [ClassicSimilarity], result of:
              0.042351827 = score(doc=4190,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.23214069 = fieldWeight in 4190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.01.2011 13:06:52
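
The score explanation above is Lucene ClassicSimilarity (TF-IDF) output: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the final score multiplies queryWeight, fieldWeight, and the two coord(1/2) factors. The short check below reproduces the 0.010587957 above from the constants in the tree, assuming those standard ClassicSimilarity definitions; the same arithmetic with docFreq = 4597 accounts for the 0.009195855 scores of the two results that follow.

  # Reproducing the ClassicSimilarity explain trees from their constants.
  # The formulas are Lucene's ClassicSimilarity defaults (an assumption here);
  # all numeric inputs are copied from the explain output on this page.
  import math

  def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
      tf = math.sqrt(freq)                              # 1.4142135 for freq=2.0
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0)) # 3.5018296 (docFreq=3622) or 3.2635105 (docFreq=4597)
      query_weight = idf * query_norm                   # 0.18244034 or 0.17002425
      field_weight = tf * idf * field_norm              # 0.23214069 or 0.21634221
      return query_weight * field_weight * coord

  # Result 1: term "22" in doc 4190, two coord(1/2) factors -> coord = 0.25.
  print(classic_score(2.0, 3622, 44218, 0.052098576, 0.046875, 0.25))  # ~0.010587957

  # Results 2 and 3: term "web" in docs 1852 and 2256, same structure.
  print(classic_score(2.0, 4597, 44218, 0.052098576, 0.046875, 0.25))  # ~0.009195855
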
  2. Abramo, G.; D'Angelo, C.A.; Di Costa, F.: A new approach to measure the scientific strengths of territories (2015) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 1852) [ClassicSimilarity], result of:
              0.03678342 = score(doc=1852,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 1852, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1852)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The current work applies a method for mapping the supply of new knowledge from public research organizations, in this case from Italian institutions at the level of regions and provinces (NUTS2 and NUTS3). Through the analysis of scientific production indexed in the Web of Science for the years 2006-2010, the new knowledge is classified in subject categories and mapped according to an algorithm for the reconciliation of authors' affiliations. Unlike other studies in the literature based on simple counting of publications, the present study adopts an indicator, Scientific Strength, which takes account of both the quantity of scientific production and its impact on the advancement of knowledge. The differences in the results that arise from the 2 approaches are examined. The results of works of this kind can inform public research policies, at national and local levels, as well as the localization strategies of research-based companies.
  3. Abramo, G.; D'Angelo, C.A.: The VQR, Italy's second national research assessment : methodological failures and ranking distortions (2015) 0.01
    0.009195855 = product of:
      0.01839171 = sum of:
        0.01839171 = product of:
          0.03678342 = sum of:
            0.03678342 = weight(_text_:web in 2256) [ClassicSimilarity], result of:
              0.03678342 = score(doc=2256,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.21634221 = fieldWeight in 2256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The 2004-2010 VQR (Research Quality Evaluation), completed in July 2013, was Italy's second national research assessment exercise. The VQR performance evaluation followed a pattern also seen in other nations, as it was based on a selected subset of products. In this work, we identify the exercise's methodological weaknesses and measure the distortions that result from them in the university performance rankings. First, we create a scenario in which we assume the efficient selection of the products to be submitted by the universities and, from this, simulate a set of rankings applying the precise VQR rating criteria. Next, we compare these "VQR rankings" with those that would derive from the application of more-appropriate bibliometrics. Finally, we extend the comparison to university rankings based on the entire scientific production for the period, as indexed in the Web of Science.