Search (9 results, page 1 of 1)

  • author_ss:"Yang, C.C."
  1. Chau, M.; Lu, Y.; Fang, X.; Yang, C.C.: Characteristics of character usage in Chinese Web searching (2009) 0.04
    0.043103583 = product of:
      0.08620717 = sum of:
        0.08620717 = sum of:
          0.051078994 = weight(_text_:x in 2456) [ClassicSimilarity], result of:
            0.051078994 = score(doc=2456,freq=2.0), product of:
              0.21896711 = queryWeight, product of:
                4.2226825 = idf(docFreq=1761, maxDocs=44218)
                0.05185498 = queryNorm
              0.23327245 = fieldWeight in 2456, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.2226825 = idf(docFreq=1761, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2456)
          0.035128172 = weight(_text_:22 in 2456) [ClassicSimilarity], result of:
            0.035128172 = score(doc=2456,freq=2.0), product of:
              0.18158731 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05185498 = queryNorm
              0.19345059 = fieldWeight in 2456, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2456)
      0.5 = coord(1/2)
    
    Date
    22.11.2008 17:57:22
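The score breakdown shown for result 1 is Lucene's ClassicSimilarity (TF-IDF) explanation. As a minimal sketch of that arithmetic: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(freq), and coord(1/2) halves the sum because one of two query clauses matched. The constants below are taken directly from the explain tree above.

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity."""
    query_weight = idf * query_norm                     # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.05185498
FIELD_NORM = 0.0390625

w_x = term_weight(2.0, 4.2226825, QUERY_NORM, FIELD_NORM)    # _text_:x
w_22 = term_weight(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)   # _text_:22
score = 0.5 * (w_x + w_22)   # coord(1/2): one of two clauses matched
```

Running this reproduces the 0.043103583 reported for doc 2456.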
  2. Chua, A.Y.K.; Yang, C.C.: The shift towards multi-disciplinarity in information science (2008) 0.02
    
    Abstract
    This article analyzes the collaboration trends, authorship, and keywords of all research articles published in the Journal of the American Society for Information Science and Technology (JASIST). Comparing the articles between two 10-year periods, namely 1988-1997 and 1998-2007, the threefold objectives are to analyze the shifts in (a) authors' collaboration trends, (b) top authors, their affiliations, and the pattern of coauthorship among them, and (c) top keywords and the subdisciplines from which they emerge. The findings reveal a distinct tendency towards collaboration among authors, with external collaborations becoming more prevalent. Top authors have grown in diversity from those affiliated predominantly with library/information-related departments to include those from information systems management, information technology, business, and the humanities. Amid heterogeneous clusters of collaboration among top authors, strongly connected cross-disciplinary coauthor pairs have become more prevalent. Correspondingly, the distribution of top keywords' occurrences, which leaned heavily on core information science, has shifted towards other subdisciplines such as information technology and sociobehavioral science.
  3. Shi, X.; Yang, C.C.: Mining related queries from Web search engine query logs using an improved association rule mining model (2007) 0.01
    
  4. Tang, X.; Yang, C.C.; Song, M.: Understanding the evolution of multiple scientific research domains using a content and network approach (2013) 0.01
    
  5. Yang, C.C.; Li, K.W.: A heuristic method based on a statistical approach for Chinese text segmentation (2005) 0.01
    
    Abstract
    The authors propose a heuristic method for Chinese automatic text segmentation based on a statistical approach. The method is developed from statistical information about the association among adjacent characters in Chinese text. Mutual information of bi-grams and significant estimation of tri-grams are utilized. A heuristic method with six rules is then proposed to determine the segmentation points in a Chinese sentence. No dictionary is required in this method. Chinese text segmentation is important in Chinese text indexing and thus greatly affects the performance of Chinese information retrieval. Due to the lack of delimiters between words in Chinese text, Chinese text segmentation is more difficult than English text segmentation. Besides, segmentation ambiguities and occurrences of out-of-vocabulary words (i.e., unknown words) are the major challenges in Chinese segmentation. Many research studies dealing with the problem of word segmentation have focused on the resolution of segmentation ambiguities. The problem of unknown word identification has not drawn much attention. The experimental results show that the proposed heuristic method is promising for segmenting unknown words as well as known words. The authors further investigated the distribution of the errors of commission and the errors of omission caused by the proposed heuristic method and benchmarked it against a previously proposed technique, boundary detection. It is found that the heuristic method outperformed the boundary detection method.
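The core statistic in the abstract above, mutual information between adjacent characters, can be sketched as follows. This is an illustrative computation only: the authors' six segmentation rules and tri-gram significant estimation are not reproduced, and the function name is hypothetical.

```python
import math
from collections import Counter

def bigram_mutual_information(text):
    """Pointwise mutual information of each adjacent character pair.

    A high value suggests the two characters associate strongly (likely
    within one word); a low value marks a candidate segmentation point.
    """
    chars = Counter(text)
    bigrams = Counter(zip(text, text[1:]))
    n_chars = len(text)
    n_bigrams = max(len(text) - 1, 1)
    mi = {}
    for (a, b), freq in bigrams.items():
        p_ab = freq / n_bigrams          # joint probability of the pair
        p_a = chars[a] / n_chars         # marginal probabilities
        p_b = chars[b] / n_chars
        mi[(a, b)] = math.log2(p_ab / (p_a * p_b))
    return mi
```

In a dictionary-free segmenter of this kind, a sentence would be cut at the adjacent pairs whose mutual information falls below a threshold learned from the corpus.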
  6. Li, K.W.; Yang, C.C.: Conceptual analysis of parallel corpus collected from the Web (2006) 0.01
    
    Abstract
    As illustrated by the World Wide Web, the volume of information in languages other than English has grown significantly in recent years. This highlights the importance of multilingual corpora. Much effort has been devoted to the compilation of multilingual corpora for the purpose of cross-lingual information retrieval and machine translation. Existing parallel corpora mostly involve European languages, such as English-French and English-Spanish. There is still a lack of parallel corpora between European languages and Asian languages. In the authors' previous work, an alignment method to identify one-to-one Chinese and English title pairs was developed to construct an English-Chinese parallel corpus automatically from the World Wide Web, and 100% precision and 87% recall were obtained. Careful analysis of these results has helped the authors to understand how the alignment method can be improved. A conceptual analysis was conducted, which includes the analysis of conceptual equivalence and conceptual information alternation in the aligned and nonaligned English-Chinese title pairs that are obtained by the alignment method. The result of the analysis not only reflects the characteristics of parallel corpora, but also gives insight into the strengths and weaknesses of the alignment method. In particular, conceptual alternation, such as omission and addition, is found to have a significant impact on the performance of the alignment method.
  7. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.01
    
    Date
    22. 3.2009 12:51:47
  8. Wang, F.L.; Yang, C.C.: The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.01
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This supports users in evaluating the relevance of the extracted documents returned by information retrieval systems and enables efficient filtering. Indirectly, these systems help to resolve the problem of information overload. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of language differences on automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language differences on automatic text summarization. It includes the effects on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using a parallel corpus in English and Chinese as the test object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
  9. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats, and disciplines are widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). 
However, research in crossing language boundaries, especially between European languages and Oriental languages, is still in its initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
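The co-occurrence-plus-Hopfield approach described in the last abstract can be sketched as spreading activation: terms are nodes, co-occurrence weights are the network's connections, and activation from an input term settles onto related terms. This is an assumed, simplified illustration, not the authors' exact procedure; the function name, the tanh transfer function, and the parameter values are hypothetical.

```python
import numpy as np

def expand_term(cooc, seed_index, iterations=10, atol=1e-6):
    """Spread activation from one seed term through a term network.

    cooc: symmetric (n, n) co-occurrence weight matrix, weights in [0, 1].
    Returns the activation level of every term after the network settles.
    """
    n = cooc.shape[0]
    act = np.zeros(n)
    act[seed_index] = 1.0                # activate the input term
    for _ in range(iterations):
        new_act = np.tanh(cooc @ act)    # sigmoid-like transfer function
        new_act[seed_index] = 1.0        # keep the input term clamped
        if np.allclose(new_act, act, atol=atol):
            break                        # network has converged
        act = new_act
    return act
```

In a thesaurus built this way, the terms whose final activation exceeds a cutoff would be emitted as semantically relevant suggestions for the input term, in either language of the parallel corpus.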