Search (2 results, page 1 of 1)

  • author_ss:"Rokach, L."
  • theme_ss:"Informetrie"
  • year_i:[2010 TO 2020}
  1. Rokach, L.; Mitra, P.: Parsimonious citer-based measures : the artificial intelligence domain as a case study (2013) 0.02
    0.020350434 = product of:
      0.0610513 = sum of:
        0.0610513 = weight(_text_:based in 212) [ClassicSimilarity], result of:
          0.0610513 = score(doc=212,freq=8.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.39947033 = fieldWeight in 212, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=212)
      0.33333334 = coord(1/3)
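The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: the field weight is tf × idf × fieldNorm, the query weight is idf × queryNorm, and their product is scaled by coord because only one of the three query clauses matched this document. As a minimal sketch, the constants from the tree can be plugged back in to reproduce the reported score:

```python
import math

# Reproduce the ClassicSimilarity score from the explain tree for
# result 1 (term "based" in doc 212). All constants are taken
# directly from the explain output above.
idf = 3.0129938           # idf(docFreq=5906, maxDocs=44218)
query_norm = 0.050723847  # queryNorm
field_norm = 0.046875     # fieldNorm(doc=212)
freq = 8.0                # termFreq of "based" in the field
coord = 1.0 / 3.0         # coord(1/3): 1 of 3 query clauses matched

tf = math.sqrt(freq)                  # ClassicSimilarity tf = sqrt(freq)
query_weight = idf * query_norm       # ~0.15283063
field_weight = tf * idf * field_norm  # ~0.39947033
score = query_weight * field_weight * coord

print(f"{score:.9f}")  # matches the reported 0.020350434 (up to float32 rounding)
```

The same arithmetic, with freq=2.0 and fieldNorm=0.0390625, yields the score shown for result 2 below.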
    
    Abstract
This article presents a new Parsimonious Citer-Based Measure for assessing the quality of academic papers. The measure is parsimonious in that it looks for the smallest set of citing authors (citers) who have read a given paper. It aims to address potential distortions in the values of existing citer-based measures, which arise from various factors such as the practice of hyperauthorship. The new measure is empirically compared with existing measures, such as the number of citers and the number of citations, in the field of artificial intelligence (AI). The results show that the new measure is highly correlated with those two measures; however, it is more robust against citation manipulation and better differentiates between prominent and nonprominent AI researchers than the above-mentioned measures.
  2. Rokach, L.; Kalech, M.; Blank, I.; Stern, R.: Who is going to win the next Association for the Advancement of Artificial Intelligence Fellowship Award? : evaluating researchers by mining bibliographic data (2011) 0.01
    0.008479347 = product of:
      0.025438042 = sum of:
        0.025438042 = weight(_text_:based in 4945) [ClassicSimilarity], result of:
          0.025438042 = score(doc=4945,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.16644597 = fieldWeight in 4945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4945)
      0.33333334 = coord(1/3)
    
    Abstract
Accurately evaluating a researcher and the quality of his or her work is an important task when decision makers have to decide on such matters as promotions and awards. Publications and citations play a key role in this task, and many previous studies have proposed measurements based on them for evaluating researchers. Machine learning techniques as a way of enhancing the evaluation process have remained relatively unexplored. We propose a machine learning approach for evaluating researchers. In particular, the proposed method combines the outputs of three learning techniques (logistic regression, decision trees, and artificial neural networks) to obtain a unified prediction with improved accuracy. We conducted several experiments to evaluate the model's ability to: (a) classify researchers in the field of artificial intelligence as Association for the Advancement of Artificial Intelligence (AAAI) fellows and (b) predict the next AAAI fellowship winners. We show that both our classification and prediction methods are more accurate than previous measurement methods, reaching a precision of 96% and a recall of 92%.