Search (3 results, page 1 of 1)

  • year_i:[1990 TO 2000}
  • author_ss:"Yang, Y."
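
  The mixed brackets in the year filter are Lucene/Solr range syntax: [ makes the lower bound inclusive and } makes the upper bound exclusive, so the filter matches 1990 <= year < 2000. Sent as raw Solr filter queries, the two facets above would look roughly like this (the parameter form is an assumption about the underlying request, not shown on this page):

    fq=year_i:[1990 TO 2000}
    fq=author_ss:"Yang, Y."
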
  1. Yang, Y.; Liu, X.: A re-examination of text categorization methods (1999) 0.04
    0.041146368 = product of:
      0.082292736 = sum of:
        0.082292736 = product of:
          0.123439096 = sum of:
            0.07962535 = weight(_text_:y in 3386) [ClassicSimilarity], result of:
              0.07962535 = score(doc=3386,freq=2.0), product of:
                0.21393733 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.04445543 = queryNorm
                0.3721901 = fieldWeight in 3386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3386)
            0.043813743 = weight(_text_:k in 3386) [ClassicSimilarity], result of:
              0.043813743 = score(doc=3386,freq=2.0), product of:
                0.15869603 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04445543 = queryNorm
                0.27608594 = fieldWeight in 3386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3386)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
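
    The tree above is Lucene's ClassicSimilarity (TF-IDF) explanation. As a minimal sketch, assuming only the factor values printed in the tree, the score can be recomputed from its leaves:

      # Recompute the ClassicSimilarity score for doc 3386 from the values above.
      import math

      QUERY_NORM = 0.04445543

      def term_weight(idf, freq, field_norm):
          # weight = queryWeight * fieldWeight
          #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          return (idf * QUERY_NORM) * (math.sqrt(freq) * idf * field_norm)

      w_y = term_weight(idf=4.8124003, freq=2.0, field_norm=0.0546875)  # ~0.07962535
      w_k = term_weight(idf=3.569778,  freq=2.0, field_norm=0.0546875)  # ~0.04381374

      # coord(2/3): two of three query clauses matched; coord(1/2): outer factor.
      score = (w_y + w_k) * (2 / 3) * (1 / 2)
      print(score)  # ~0.041146, i.e. the 0.04 shown next to result 1
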
    
    Abstract
    This paper reports a controlled study with statistical significance tests on five text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, the Linear Least-squares Fit (LLSF) mapping and a Naive Bayes (NB) classifier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as a function of the training-set category frequency. Our results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category is small (less than ten), and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).
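
    As an illustration of the kind of comparison the abstract describes (not the paper's actual corpus, features, or parameters, which are all assumptions here), two of the five methods can be run side by side with scikit-learn and scored with a macro-averaged measure that weights rare categories equally with common ones:

      # Hedged sketch: kNN vs. SVM text categorization; dataset and settings are illustrative.
      from sklearn.datasets import fetch_20newsgroups
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import LinearSVC
      from sklearn.metrics import f1_score

      train = fetch_20newsgroups(subset="train")
      test = fetch_20newsgroups(subset="test")

      vec = TfidfVectorizer(max_features=20000)
      X_train = vec.fit_transform(train.data)
      X_test = vec.transform(test.data)

      for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=45)),
                        ("SVM", LinearSVC())]:
          clf.fit(X_train, train.target)
          pred = clf.predict(X_test)
          # Macro-averaged F1 is sensitive to performance on low-frequency categories,
          # which is where the study reports the largest differences between methods.
          print(name, f1_score(test.target, pred, average="macro"))
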
  2. Yang, Y.; Chute, C.G.: A schematic analysis of the Unified Medical Language System (1992) 0.02
    0.022750102 = product of:
      0.045500204 = sum of:
        0.045500204 = product of:
          0.13650061 = sum of:
            0.13650061 = weight(_text_:y in 6445) [ClassicSimilarity], result of:
              0.13650061 = score(doc=6445,freq=2.0), product of:
                0.21393733 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.04445543 = queryNorm
                0.6380402 = fieldWeight in 6445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6445)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  3. Yang, Y.; Wilbur, J.: Using corpus statistics to remove redundant words in text categorization (1996) 0.01
    0.011375051 = product of:
      0.022750102 = sum of:
        0.022750102 = product of:
          0.068250306 = sum of:
            0.068250306 = weight(_text_:y in 4199) [ClassicSimilarity], result of:
              0.068250306 = score(doc=4199,freq=2.0), product of:
                0.21393733 = queryWeight, product of:
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.04445543 = queryNorm
                0.3190201 = fieldWeight in 4199, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8124003 = idf(docFreq=976, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4199)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
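
  All three explanation trees instantiate the same ClassicSimilarity per-term weight; written out in the trees' own factor names (a reading of the explanation output, not printed on the page itself):

    \mathrm{weight}(t,d) = \mathrm{tf}(t,d)\cdot\mathrm{idf}(t)^{2}\cdot\mathrm{queryNorm}\cdot\mathrm{fieldNorm}(d), \qquad \mathrm{tf}(t,d)=\sqrt{\mathrm{freq}(t,d)}

  The document score sums these weights and scales by the coord factors, i.e. the fraction of query clauses matched. Results 2 and 3 match only the y term (coord(1/3)), so their order is decided entirely by fieldNorm: 0.09375 for doc 6445 against 0.046875 for doc 4199. The shorter indexed field doubles the fieldWeight (0.6380402 vs. 0.3190201) and therefore the final score (0.02 vs. 0.01).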