Search (2 results, page 1 of 1)

  • author_ss:"Schmidt, M."
  • theme_ss:"Informetrie"
  1. Olensky, M.; Schmidt, M.; Eck, N.J. van: Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to the Web of Science (2016) 0.00
    0.0016647738 = product of:
      0.014982964 = sum of:
        0.014982964 = product of:
          0.029965928 = sum of:
            0.029965928 = weight(_text_:web in 3130) [ClassicSimilarity], result of:
              0.029965928 = score(doc=3130,freq=6.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.3122631 = fieldWeight in 3130, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3130)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
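The explain tree above is Lucene's ClassicSimilarity (TF-IDF) debug output for the query term "web". As a sanity check, its arithmetic can be reproduced from the listed constants; the variable names below are illustrative, but every number is taken directly from the tree:

```python
import math

# Constants copied from the explain output above.
doc_freq, max_docs = 4597, 44218   # documents containing "web" / total docs
freq = 6.0                         # term frequency of "web" in this field
field_norm = 0.0390625             # length normalization (quantized by Lucene)
query_norm = 0.02940506            # query normalization factor

# ClassicSimilarity formulas:
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.2635105
tf = math.sqrt(freq)                             # 2.4494898
query_weight = idf * query_norm                  # 0.09596372
field_weight = tf * idf * field_norm             # 0.3122631
raw_score = query_weight * field_weight          # 0.029965928

# coord factors from the tree: 1 of 2 inner clauses and
# 1 of 9 top-level query clauses matched.
score = raw_score * (1 / 2) * (1 / 9)            # 0.0016647738
```

The low final score (0.00 when rounded to two decimals) is thus mostly an artifact of the coord factors: only one of nine top-level query clauses matched this record.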
    
    Abstract
    The results of bibliometric studies provided by bibliometric research groups, for example, the Centre for Science and Technology Studies (CWTS) and the Institute for Research Information and Quality Assurance (iFQ), are often used in the process of research assessment. Their databases use Web of Science (WoS) citation data, which they match according to their own matching algorithms: in the case of CWTS for standard usage in their studies, and in the case of iFQ on an experimental basis. Because the problem of nonmatched citations in the WoS persists due to inaccuracies in the references or inaccuracies introduced in the data extraction process, it is important to ascertain how well these inaccuracies are rectified in these citation matching algorithms. This article evaluates the algorithms of CWTS and iFQ in comparison to the WoS in a quantitative and a qualitative analysis. The analysis builds upon the method and the manually verified corpus of a previous study. The algorithm of CWTS performs best, closely followed by that of iFQ. The WoS algorithm still performs quite well (F1 score: 96.41%), but shows deficits in matching references containing inaccuracies. An additional problem is posed by incorrectly provided cited reference information in source articles by the WoS.
    Object
    Web of Science
  2. Schmidt, M.: An analysis of the validity of retraction annotation in PubMed and the Web of Science (2018) 0.00
    0.0016647738 = product of:
      0.014982964 = sum of:
        0.014982964 = product of:
          0.029965928 = sum of:
            0.029965928 = weight(_text_:web in 4044) [ClassicSimilarity], result of:
              0.029965928 = score(doc=4044,freq=6.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.3122631 = fieldWeight in 4044, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4044)
          0.5 = coord(1/2)
      0.11111111 = coord(1/9)
    
    Abstract
    Research on scientific misconduct relies increasingly on retractions of articles. An interdisciplinary line of research has been established that empirically assesses the phenomenon of scientific misconduct using information on retractions, and thus aims to shed light on aspects of misconduct that previously were hidden. However, comparability and interpretability of studies are to a certain extent impeded by an absence of standards in corpus delineation and by the fact that the validity of this empirical data basis has never been systematically scrutinized. This article assesses the conceptual and empirical delineation of retractions against related publication types through a comparative analysis of the coverage and consistency of retraction annotation in the databases PubMed and the Web of Science (WoS), which are both commonly used for empirical studies on retractions. The searching and linking approaches of the WoS were subsequently evaluated. The results indicate that a considerable number of PubMed retracted publications and retractions are not labeled as such in the WoS or are indistinguishable from corrections, which is highly relevant for corpus and sample strategies in the WoS.
    Object
    Web of Science