Search (4 results, page 1 of 1)

  • Active filter: author_ss:"Sun, A."
  1. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: An evaluation of classification models for question topic categorization (2012) 0.01
    0.008697641 = product of:
      0.017395282 = sum of:
        0.017395282 = product of:
          0.034790564 = sum of:
            0.034790564 = weight(_text_:c in 237) [ClassicSimilarity], result of:
              0.034790564 = score(doc=237,freq=4.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.2694848 = fieldWeight in 237, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=237)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification, and on short texts more generally. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-word features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show which aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
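
    The numeric breakdown under each hit is Lucene's "explain" output for ClassicSimilarity (TF-IDF): tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each coord(1/2) halves the score because only one of two query clauses matched at that level. A minimal Python sketch reproducing the score of result 1, assuming the standard ClassicSimilarity formulas (the function names here are illustrative) and taking queryNorm and the quantized fieldNorm verbatim from the tree rather than recomputing them:

      import math

      # Lucene ClassicSimilarity arithmetic as shown in the explain trees above.
      def idf(doc_freq, max_docs):
          # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=1.0):
          tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
          term_idf = idf(doc_freq, max_docs)          # 3.4494052 for docFreq=3817
          query_weight = term_idf * query_norm        # 0.1291003  = queryWeight
          field_weight = tf * term_idf * field_norm   # 0.2694848  = fieldWeight
          return coord * query_weight * field_weight

      # Result 1: weight(_text_:c in 237), freq=4.0, coord(1/2) applied twice.
      print(classic_score(4.0, 3817, 44218, 0.037426826, 0.0390625,
                          coord=0.5 * 0.5))           # ~0.008697641
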
  2. Sun, A.; Lim, E.-P.: Web unit-based mining of homepage relationships (2006) 0.01
    0.0063385232 = product of:
      0.0126770465 = sum of:
        0.0126770465 = product of:
          0.025354093 = sum of:
            0.025354093 = weight(_text_:22 in 5274) [ClassicSimilarity], result of:
              0.025354093 = score(doc=5274,freq=2.0), product of:
                0.13106237 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037426826 = queryNorm
                0.19345059 = fieldWeight in 5274, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5274)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.7.2006 16:18:25
  3. Li, C.; Sun, A.; Datta, A.: TSDW: Two-stage word sense disambiguation using Wikipedia (2013) 0.01
    0.0061501605 = product of:
      0.012300321 = sum of:
        0.012300321 = product of:
          0.024600642 = sum of:
            0.024600642 = weight(_text_:c in 956) [ClassicSimilarity], result of:
              0.024600642 = score(doc=956,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1905545 = fieldWeight in 956, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Li, C.; Sun, A.: Extracting fine-grained location with temporal awareness in tweets : a two-stage approach (2017) 0.00
    0.0049201283 = product of:
      0.009840257 = sum of:
        0.009840257 = product of:
          0.019680513 = sum of:
            0.019680513 = weight(_text_:c in 3686) [ClassicSimilarity], result of:
              0.019680513 = score(doc=3686,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1524436 = fieldWeight in 3686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
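
    The same sketch reproduces the remaining hits; only freq, docFreq, and fieldNorm change. Result 2 matches on the term "22" (idf 3.5018296, docFreq=3622), while results 3 and 4 match on "c" like result 1:

      # Result 2: weight(_text_:22 in 5274), freq=2.0, docFreq=3622.
      print(classic_score(2.0, 3622, 44218, 0.037426826, 0.0390625,
                          coord=0.5 * 0.5))           # ~0.0063385232

      # Result 4: weight(_text_:c in 3686), freq=2.0, fieldNorm=0.03125.
      print(classic_score(2.0, 3817, 44218, 0.037426826, 0.03125,
                          coord=0.5 * 0.5))           # ~0.0049201283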