Search (2 results, page 1 of 1)

  • × author_ss:"Schier, H."
  • × author_ss:"Bornmann, L."
  1. Bornmann, L.; Thor, A.; Marx, W.; Schier, H.: The application of bibliometrics to research evaluation in the humanities and social sciences : an exploratory study using normalized Google Scholar data for the publications of a research institute (2016)
    
    Abstract
    In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. Using a concrete example, this study attempts to evaluate a research institute from the area of the social sciences and humanities with the help of data from Google Scholar (GS). In order to use GS for a bibliometric study, we developed procedures for the normalization of citation impact, building on the procedures of classical bibliometrics. To test the convergent validity of the normalized citation impact scores, we calculated normalized scores for a subset of the publications based on data from the Web of Science (WoS) and Scopus. Although the scores calculated with GS and with WoS/Scopus are not identical for the different publication types considered here, they are so similar that they lead to the same assessment of the institute investigated in this study: for example, the institute's papers whose journals are covered in the WoS are cited at about an average rate (compared with the other papers in those journals).
    Type
    a
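
    The normalization mentioned in this abstract follows classical bibliometric practice: a paper's citation count is divided by the expected (mean) citation count of a reference set of comparable papers, e.g. those from the same subject area and publication year, so that a score of 1.0 marks citation impact at exactly the average rate. The abstract does not spell out the authors' GS-based procedure, so the following Python sketch only illustrates that mean-based normalization; the field names and the helper function are hypothetical.

    from collections import defaultdict
    from statistics import mean

    def normalized_citation_scores(papers):
        # Group papers into reference sets by (field, year); both keys
        # are hypothetical -- the study builds its reference sets from
        # GS data and publication types.
        reference_sets = defaultdict(list)
        for p in papers:
            reference_sets[(p["field"], p["year"])].append(p["citations"])

        # Expected (mean) citation rate of each reference set.
        expected = {key: mean(counts) for key, counts in reference_sets.items()}

        # A paper's normalized score is its citation count divided by
        # the expected rate of its reference set; 1.0 means the paper
        # is cited at exactly the average rate.
        scores = []
        for p in papers:
            e = expected[(p["field"], p["year"])]
            scores.append(p["citations"] / e if e > 0 else 0.0)
        return scores

    papers = [
        {"citations": 14, "field": "sociology", "year": 2012},
        {"citations": 3,  "field": "sociology", "year": 2012},
        {"citations": 7,  "field": "history",   "year": 2012},
    ]
    print(normalized_citation_scores(papers))  # [1.647..., 0.352..., 1.0]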
  2. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011)
    
    Abstract
    In a comprehensive research project, we investigated the predictive validity of selection decisions and reviewers' ratings at the open access journal Atmospheric Chemistry and Physics (ACP). ACP is a high-impact journal publishing papers on the Earth's atmosphere and the underlying chemical and physical processes. With regard to predictive validity, scientific journals have to deal with the following question: are in fact the "best" scientific works selected from the manuscripts submitted? In this study, we examined whether selecting the "best" manuscripts means selecting papers that, after publication, show top citation performance compared with other papers in the same research area. First, we appraised the citation impact of the later published manuscripts on the basis of the percentile citedness rank classes of the population distribution (scaled within a specific subfield). Second, we analyzed the association between the decisions (n = 677 manuscripts that were accepted, or rejected but published elsewhere) or the reviewers' ratings (for n = 315 manuscripts) and the citation impact classes of the manuscripts. The results confirm the predictive validity of the ACP peer review system.
    Type
    a
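
    The percentile rank classes used in this study assign each paper a citedness percentile within the population distribution of its subfield and then bin the percentile into rank classes. The six-class scheme in the sketch below (from the bottom 50% up to the top 1%) is a common choice in this literature, but the abstract does not give the study's exact boundaries, so the bounds, labels, and helper names here are assumptions.

    from bisect import bisect_left

    # Illustrative six-class scheme (lower percentile bounds and labels);
    # the boundaries used in the actual study may differ.
    CLASS_BOUNDS = [0, 50, 75, 90, 95, 99]
    CLASS_LABELS = ["<50%", "50-75%", "75-90%", "90-95%", "95-99%", "top 1%"]

    def percentile(value, population):
        # Share of the population cited less often than `value`.
        ranked = sorted(population)
        return 100.0 * bisect_left(ranked, value) / len(ranked)

    def rank_class(citations, subfield_citations):
        # Scale the paper against the population distribution of its
        # subfield, then pick the highest class whose lower bound the
        # percentile reaches.
        p = percentile(citations, subfield_citations)
        idx = max(i for i, bound in enumerate(CLASS_BOUNDS) if p >= bound)
        return CLASS_LABELS[idx]

    subfield = [0, 0, 1, 2, 3, 5, 8, 13, 21, 55]
    print(rank_class(21, subfield))  # '75-90%': 8 of 10 papers are cited less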