Search (2 results, page 1 of 1)

  • author_ss:"Yang, Y."
  • theme_ss:"Computerlinguistik"
  1. Yang, Y.; Lu, Q.; Zhao, T.: A delimiter-based general approach for Chinese term extraction (2009)
    Abstract
    This article addresses a two-step approach for term extraction. In the first step, term candidate extraction, a new delimiter-based approach is proposed that identifies features of the delimiters surrounding term candidates rather than features of the term candidates themselves. This delimiter-based method is much more stable and domain independent than previous approaches. In the second step, term verification, an algorithm using link analysis is applied to calculate the relevance between term candidates and the sentences from which the terms are extracted. All information is obtained from the working domain corpus without the need for prior domain knowledge. The approach is not targeted at any specific domain, and no extensive training is needed when applying it to new domains. In other words, the method is not domain dependent, which makes it especially useful for resource-limited domains. Evaluations on Chinese text in two different domains show significant improvements over existing techniques and also verify its efficiency and its relatively domain-independent nature. The proposed method is also very effective at extracting new terms, so it can serve as an efficient tool for updating domain knowledge, especially for expanding lexicons.
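    The candidate-extraction step described above can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: it assumes delimiters can be approximated by the most frequent token types in the corpus (function words tend to dominate frequency counts), and the `delimiter_ratio` knob is a hypothetical parameter, not one from the article.

    ```python
    from collections import Counter

    def extract_term_candidates(tokens, delimiter_ratio=0.1):
        """Sketch of delimiter-based term candidate extraction.

        Treats the top fraction of token types (by corpus frequency) as
        delimiters and takes each maximal run of non-delimiter tokens
        between them as a term candidate. The features used come from
        the delimiters, not from the candidates themselves.
        """
        freq = Counter(tokens)
        # Assumption: the most frequent token types act as delimiters.
        n_delims = max(1, int(len(freq) * delimiter_ratio))
        delimiters = {tok for tok, _ in freq.most_common(n_delims)}

        candidates, current = [], []
        for tok in tokens:
            if tok in delimiters:
                if current:
                    candidates.append(" ".join(current))
                    current = []
            else:
                current.append(tok)
        if current:
            candidates.append(" ".join(current))
        return candidates
    ```

    The verification step (link analysis between candidates and their source sentences) would then rank these candidates; it is omitted here for brevity.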
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.1, S.111-125
  2. Yang, Y.; Wilbur, J.: Using corpus statistics to remove redundant words in text categorization (1996)
    Source
    Journal of the American Society for Information Science. 47(1996) no.5, S.357-369