Search (6 results, page 1 of 1)

  • Filter: author_ss:"Li, C."
  • Filter: year_i:[2010 TO 2020}
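For reference, the two facets above act as Solr filter queries: an exact match on the author field and a half-open year range ([2010 TO 2020} includes 2010 but excludes 2020). Below is a minimal sketch of how such a filtered request could be issued against a Solr select handler; the endpoint URL, core name, and surrounding parameters are assumptions for illustration, not taken from this interface:

    import requests

    # Hypothetical Solr endpoint; host and core name are assumptions.
    SOLR_SELECT = "http://localhost:8983/solr/catalog/select"

    params = {
        "q": "*:*",                      # match all, then narrow with filters
        "fq": [
            'author_ss:"Li, C."',        # exact facet value on a string field
            "year_i:[2010 TO 2020}",     # half-open range: 2010 <= year < 2020
        ],
        "rows": 10,
        "wt": "json",
    }

    response = requests.get(SOLR_SELECT, params=params)
    print(response.json()["response"]["numFound"])  # 6 for the result set shown below

Passing a list for "fq" makes requests repeat the parameter, which is how Solr expects multiple filter queries to be supplied.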
  1. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: An evaluation of classification models for question topic categorization (2012) 0.01
    0.008697641 = product of:
      0.017395282 = sum of:
        0.017395282 = product of:
          0.034790564 = sum of:
            0.034790564 = weight(_text_:c in 237) [ClassicSimilarity], result of:
              0.034790564 = score(doc=237,freq=4.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.2694848 = fieldWeight in 237, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=237)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions, organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification, and on short texts more generally. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-words features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show which aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
  2. Cheang, B.; Chu, S.K.W.; Li, C.; Lim, A.: A multidimensional approach to evaluating management journals : refining PageRank via the differentiation of citation types and identifying the roles that management journals play (2014) 0.01
    0.007380193 = product of:
      0.014760386 = sum of:
        0.014760386 = product of:
          0.029520772 = sum of:
            0.029520772 = weight(_text_:c in 1551) [ClassicSimilarity], result of:
              0.029520772 = score(doc=1551,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.22866541 = fieldWeight in 1551, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1551)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Li, C.; Sun, A.; Datta, A.: TSDW: Two-stage word sense disambiguation using Wikipedia (2013) 0.01
    0.0061501605 = product of:
      0.012300321 = sum of:
        0.012300321 = product of:
          0.024600642 = sum of:
            0.024600642 = weight(_text_:c in 956) [ClassicSimilarity], result of:
              0.024600642 = score(doc=956,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1905545 = fieldWeight in 956, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.01
    0.0061501605 = product of:
      0.012300321 = sum of:
        0.012300321 = product of:
          0.024600642 = sum of:
            0.024600642 = weight(_text_:c in 4048) [ClassicSimilarity], result of:
              0.024600642 = score(doc=4048,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1905545 = fieldWeight in 4048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4048)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.01
    0.0061501605 = product of:
      0.012300321 = sum of:
        0.012300321 = product of:
          0.024600642 = sum of:
            0.024600642 = weight(_text_:c in 5045) [ClassicSimilarity], result of:
              0.024600642 = score(doc=5045,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1905545 = fieldWeight in 5045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5045)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Li, C.; Sun, A.: Extracting fine-grained location with temporal awareness in tweets : a two-stage approach (2017) 0.00
    0.0049201283 = product of:
      0.009840257 = sum of:
        0.009840257 = product of:
          0.019680513 = sum of:
            0.019680513 = weight(_text_:c in 3686) [ClassicSimilarity], result of:
              0.019680513 = score(doc=3686,freq=2.0), product of:
                0.1291003 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.037426826 = queryNorm
                0.1524436 = fieldWeight in 3686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
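
The score breakdown printed under each result follows Lucene's ClassicSimilarity (TF-IDF): the per-term score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the result is then scaled by the coord factors. The following is a minimal sketch that reproduces the first result's numbers from the values shown in its breakdown; the constants are copied from that output, and the formulas are standard ClassicSimilarity arithmetic:

    import math

    # Constants copied from the explain output of result 1 (doc 237).
    freq       = 4.0          # occurrences of the term "c" in the matched field
    doc_freq   = 3817         # documents containing the term
    max_docs   = 44218        # documents in the index
    query_norm = 0.037426826
    field_norm = 0.0390625    # length normalization stored for this field

    tf  = math.sqrt(freq)                            # 2.0
    idf = math.log(max_docs / (doc_freq + 1)) + 1.0  # ~3.4494052

    query_weight = idf * query_norm                  # ~0.1291003
    field_weight = tf * idf * field_norm             # ~0.2694848
    term_score   = query_weight * field_weight       # ~0.034790564

    # Two coord(1/2) factors: only one of two clauses matched at each level.
    final_score = term_score * 0.5 * 0.5             # ~0.008697641
    print(f"{final_score:.9f}")

The remaining results differ only in freq and fieldNorm, which is why their scores scale down from the same idf and queryNorm values.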