Search (4 results, page 1 of 1)

  • author_ss:"Ho, K.S."
  1. Dang, E.K.F.; Luk, R.W.P.; Ho, K.S.; Chan, S.C.F.; Lee, D.L.: A new measure of clustering effectiveness : algorithms and experimental studies (2008) 0.06
    0.058072533 = product of:
      0.1742176 = sum of:
        0.1742176 = weight(_text_:d.l in 1367) [ClassicSimilarity], result of:
          0.1742176 = score(doc=1367,freq=2.0), product of:
            0.31052554 = queryWeight, product of:
              7.2542357 = idf(docFreq=84, maxDocs=44218)
              0.0428061 = queryNorm
            0.5610411 = fieldWeight in 1367, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2542357 = idf(docFreq=84, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1367)
      0.33333334 = coord(1/3)
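    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a rough illustration only, the following Python sketch reproduces the arithmetic for this result, taking queryNorm, fieldNorm, and the coord factor as the values reported above rather than deriving them from the index:

    ```python
    import math

    # Values taken from the explain output above (assumed as given, not recomputed)
    max_docs   = 44218      # documents in the index
    doc_freq   = 84         # documents containing the matched term
    query_norm = 0.0428061  # query normalization factor, as reported
    field_norm = 0.0546875  # encoded length norm for doc 1367, as reported
    freq       = 2.0        # term frequency in the matching field
    coord      = 1.0 / 3.0  # coord(1/3): 1 of 3 query clauses matched

    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~7.2542357
    tf  = math.sqrt(freq)                            # ~1.4142135

    query_weight = idf * query_norm                  # ~0.31052554
    field_weight = tf * idf * field_norm             # ~0.5610411

    score = query_weight * field_weight * coord
    print(score)                                     # ~0.058072533
    ```

    The same tf-idf factors explain the remaining results; only freq, fieldNorm, and the coord nesting differ.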
    
  2. Wong, W.S.; Luk, R.W.P.; Leong, H.V.; Ho, K.S.; Lee, D.L.: Re-examining the effects of adding relevance information in a relevance feedback environment (2008) 0.04
    0.04148038 = product of:
      0.12444114 = sum of:
        0.12444114 = weight(_text_:d.l in 2084) [ClassicSimilarity], result of:
          0.12444114 = score(doc=2084,freq=2.0), product of:
            0.31052554 = queryWeight, product of:
              7.2542357 = idf(docFreq=84, maxDocs=44218)
              0.0428061 = queryNorm
            0.40074366 = fieldWeight in 2084, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2542357 = idf(docFreq=84, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2084)
      0.33333334 = coord(1/3)
    
  3. Dang, E.K.F.; Luk, R.W.P.; Allan, J.; Ho, K.S.; Chung, K.F.L.; Lee, D.L.: A new context-dependent term weight computed by boost and discount using relevance information (2010) 0.04
    0.04148038 = product of:
      0.12444114 = sum of:
        0.12444114 = weight(_text_:d.l in 4120) [ClassicSimilarity], result of:
          0.12444114 = score(doc=4120,freq=2.0), product of:
            0.31052554 = queryWeight, product of:
              7.2542357 = idf(docFreq=84, maxDocs=44218)
              0.0428061 = queryNorm
            0.40074366 = fieldWeight in 4120, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.2542357 = idf(docFreq=84, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4120)
      0.33333334 = coord(1/3)
    
  4. Lan, K.C.; Ho, K.S.; Luk, R.W.P.; Leong, H.V.: Dialogue act recognition using maximum entropy (2008) 0.02
    0.020289257 = product of:
      0.06086777 = sum of:
        0.06086777 = product of:
          0.12173554 = sum of:
            0.12173554 = weight(_text_:da in 1717) [ClassicSimilarity], result of:
              0.12173554 = score(doc=1717,freq=10.0), product of:
                0.20539105 = queryWeight, product of:
                  4.7981725 = idf(docFreq=990, maxDocs=44218)
                  0.0428061 = queryNorm
                0.5927013 = fieldWeight in 1717, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.7981725 = idf(docFreq=990, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A dialogue-based interface for information systems is considered a potentially very useful approach to information access. A key step in computer processing of natural-language dialogues is dialogue-act (DA) recognition. In this paper, we apply a feature-based classification approach for DA recognition by using the maximum entropy (ME) method to build a classifier for labeling utterances with DA tags. The ME method has the advantage that a large number of heterogeneous features can be flexibly combined in one classifier, which can facilitate feature selection. A unique characteristic of our approach is that it does not need to model the prior probability of DAs directly, and thus avoids the use of a discourse grammar. This simplifies the implementation of the classifier and improves the efficiency of DA recognition, without sacrificing the classification accuracy. We evaluate the classifier using a large data set based on the Switchboard corpus. Encouraging performance is observed; the highest classification accuracy achieved is 75.03%. We also propose a heuristic to address the problem of sparseness of the data set. This problem has resulted in poor classification accuracies of some DA types that have very low occurrence frequencies in the data set. Preliminary evaluation shows that the method is effective in improving the macroaverage classification accuracy of the ME classifier.
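
    The abstract describes feature-based DA tagging with a maximum-entropy classifier over heterogeneous utterance features. The sketch below is not the authors' implementation; it is a minimal illustration of that general idea, assuming scikit-learn is available and using invented toy utterances and features in place of the paper's Switchboard data and feature set. Multinomial logistic regression serves here as the maximum-entropy classifier.

    ```python
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy utterances with dialogue-act (DA) tags (invented examples, not Switchboard data)
    train = [
        ("do you like jazz", "yes-no-question"),
        ("yeah i do", "yes-answer"),
        ("i mostly listen to classical", "statement"),
        ("uh-huh", "backchannel"),
        ("really", "backchannel"),
        ("have you been to a concert lately", "yes-no-question"),
        ("no not recently", "no-answer"),
        ("we went to one last month", "statement"),
    ]

    def features(utterance: str) -> dict:
        """Heterogeneous per-utterance features, freely mixed in one classifier."""
        tokens = utterance.split()
        feats = {f"word={w}": 1 for w in tokens}       # bag-of-words cues
        feats["first_word=" + tokens[0]] = 1           # cue word at utterance start
        feats["length"] = len(tokens)                  # utterance length
        feats["is_short"] = int(len(tokens) <= 2)      # backchannels tend to be short
        return feats

    X = [features(u) for u, _ in train]
    y = [tag for _, tag in train]

    # Multinomial logistic regression == maximum-entropy classification;
    # no discourse grammar or DA prior model is needed here.
    clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X, y)

    print(clf.predict([features("do you play the piano")]))  # likely a yes-no-question
    ```

    New feature types can be added to features() without changing the classifier, which mirrors the flexibility the abstract attributes to the ME approach.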