Search (21 results, page 1 of 2)

  • author_ss:"Yang, C.C."
  1. Lam, W.; Yang, C.C.; Menczer, F.: Introduction to the special topic section on mining Web resources for enhancing information retrieval (2007) 0.01
    0.0051002675 = product of:
      0.02040107 = sum of:
        0.02040107 = weight(_text_:information in 600) [ClassicSimilarity], result of:
          0.02040107 = score(doc=600,freq=12.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3325631 = fieldWeight in 600, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=600)
      0.25 = coord(1/4)
    
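Each relevance score in this listing is a Lucene ClassicSimilarity explain tree. As a sanity check, the first tree can be recomputed from the values it reports; a minimal Python sketch, with all constants read directly off the tree above:

```python
import math

# Inputs reported by the explain tree for doc 600.
freq, doc_freq, max_docs = 12.0, 20772, 44218
query_norm, field_norm, coord = 0.034944877, 0.0546875, 0.25

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=20772, maxDocs=44218)
tf = math.sqrt(freq)                             # tf(freq=12.0)
query_weight = idf * query_norm                  # queryWeight
field_weight = tf * idf * field_norm             # fieldWeight in 600
score = coord * query_weight * field_weight      # product with coord(1/4)

print(score)
```

The recomputed values agree with the tree: queryWeight ≈ 0.06134496, fieldWeight ≈ 0.3325631, and the final score ≈ 0.0051002675.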
    Abstract
    The amount of information on the Web has been expanding at an enormous pace. There are a variety of Web documents in different genres, such as news, reports, and reviews. Traditionally, the information displayed on Web sites has been static. Recently, however, many Web sites have begun offering content that is dynamically generated and frequently updated. It is also common for Web sites to contain information in different languages, since many countries adopt more than one language. Moreover, content may exist in multimedia formats, including text, images, video, and audio.
    Footnote
    Introduction to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1791-1792
  2. Chua, A.Y.K.; Yang, C.C.: The shift towards multi-disciplinarity in information science (2008) 0.01
    0.0050479556 = product of:
      0.020191822 = sum of:
        0.020191822 = weight(_text_:information in 2389) [ClassicSimilarity], result of:
          0.020191822 = score(doc=2389,freq=16.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3291521 = fieldWeight in 2389, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2389)
      0.25 = coord(1/4)
    
    Abstract
    This article analyzes the collaboration trends, authorship, and keywords of all research articles published in the Journal of the American Society for Information Science and Technology (JASIST). Comparing the articles between two 10-year periods, namely 1988-1997 and 1998-2007, the three-fold objectives are to analyze the shifts in (a) authors' collaboration trends, (b) top authors, their affiliations, and the pattern of coauthorship among them, and (c) top keywords and the subdisciplines from which they emerge. The findings reveal a distinct tendency towards collaboration among authors, with external collaborations becoming more prevalent. Top authors have grown in diversity from those affiliated predominantly with library/information-related departments to include those from information systems management, information technology, business, and the humanities. Amid heterogeneous clusters of collaboration among top authors, strongly connected cross-disciplinary coauthor pairs have become more prevalent. Correspondingly, the distribution of top keywords' occurrences, which leans heavily on core information science, has shifted towards other subdisciplines such as information technology and sociobehavioral science.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.13, S.2156-2170
  3. Yang, C.C.; Lam, W.: Introduction to the special topic section on multilingual information systems (2006) 0.00
    0.0047219303 = product of:
      0.018887721 = sum of:
        0.018887721 = weight(_text_:information in 5043) [ClassicSimilarity], result of:
          0.018887721 = score(doc=5043,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3078936 = fieldWeight in 5043, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5043)
      0.25 = coord(1/4)
    
    Abstract
    The information available in languages other than English on the World Wide Web and in global information systems is increasing significantly. According to some recent reports, the growth in the number of non-English-speaking Internet users is significantly higher than the growth of English-speaking Internet users. Asia and Europe have become the two most-populated regions of Internet users. However, there are many different languages in the many different countries of Asia and Europe, and there are many countries in the world using more than one language as their official language. For example, Chinese and English are official languages in Hong Kong SAR; English and French are official languages in Canada. In the global economy, information systems are no longer utilized by users in a single geographical region but all over the world. Information can be generated, stored, processed, and accessed in several different languages. All of this reveals the importance of research in multilingual information systems.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.629-631
  4. Yang, C.C.: Content-based image retrieval : a comparison between query by example and image browsing map approaches (2005) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 4649) [ClassicSimilarity], result of:
          0.016657405 = score(doc=4649,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 4649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4649)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 30(2005) no.3, S.254-
  5. Chuang, K.Y.; Yang, C.C.: Informational support exchanges using different computer-mediated communication formats in a social media alcoholism community (2014) 0.00
    0.0039349417 = product of:
      0.015739767 = sum of:
        0.015739767 = weight(_text_:information in 1179) [ClassicSimilarity], result of:
          0.015739767 = score(doc=1179,freq=14.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.256578 = fieldWeight in 1179, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1179)
      0.25 = coord(1/4)
    
    Abstract
    E-patients seeking information online often seek specific advice related to coping with their health condition(s) among social networking sites. They may be looking for social connectivity with compassionate strangers who may have experienced similar situations, in order to share opinions and experiences, rather than for authoritative medical information. Previous studies document distinct technological features and different levels of social support interaction patterns. It is expected that the design of the social media functions will have an impact on the user behavior of social support exchange. In this part of a multipart study, we investigate the social support types, in particular informational support types, across multiple computer-mediated communication formats (forum, journal, and notes) within an alcoholism community, using descriptive content analysis on 3 months of data from a MedHelp online peer support community. We present the results of identified informational support types, including advice, referral, fact, personal experiences, and opinions, either offered or requested. The fact type was exchanged most often among the messages; however, there were some different patterns between notes and journal posts. Notes were used for maintaining relationships rather than as a main source for seeking information. Notes were similar to comments made on journal posts, which may indicate friendship between journal readers and the author. These findings suggest that users may have initially joined the MedHelp Alcoholism Community for information-seeking purposes but continue participation even after they have completed their information gathering, because of the relationships they have formed with community members through social media features.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.1, S.37-52
  6. Yang, C.C.; Chung, A.: A personal agent for Chinese financial news on the Web (2002) 0.00
    0.0036430482 = product of:
      0.014572193 = sum of:
        0.014572193 = weight(_text_:information in 205) [ClassicSimilarity], result of:
          0.014572193 = score(doc=205,freq=12.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23754507 = fieldWeight in 205, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=205)
      0.25 = coord(1/4)
    
    Abstract
    As the Web has become a major channel of information dissemination, many newspapers have expanded their services by providing electronic versions of news information on the Web. However, most investors find it difficult to search for the financial information of interest in the huge Web information space (the information-overload problem). In this article, we present a personal agent that utilizes user profiles and user relevance feedback to search for Chinese Web financial news articles on behalf of users. A Chinese indexing component is developed to index the continuously fetched Chinese financial news articles. User profiles capture the basic knowledge of user preferences based on the sources of news articles, the regions of the news reported, the categories of industries related, the listed companies, and user-specified keywords. User feedback captures the semantics of the user-rated news articles. The search engine ranks the top 20 news articles that users are most interested in and reports them to the user daily or on demand. Experiments are conducted to measure the performance of the agent based on the inputs from user profiles and user feedback. They show that simply using the user profiles does not increase the precision of the retrieval. However, user relevance feedback helps to increase the performance of the retrieval as the user interacts with the system, until it reaches the optimal performance. Combining both user profiles and user relevance feedback produces the best performance.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.2, S.186-196
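The combination of user profiles with relevance feedback described in the abstract above is in the spirit of classic Rocchio feedback. A minimal sketch under that assumption - the term-vector representation, the weights, and the toy data are illustrative, not taken from the paper:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward user-rated relevant articles and away
    from nonrelevant ones (standard Rocchio update; weights are assumed)."""
    dims = range(len(query))

    def centroid(docs):
        return [sum(d[i] for d in docs) / len(docs) for i in dims]

    q = [alpha * x for x in query]
    if relevant:
        q = [qi + beta * ci for qi, ci in zip(q, centroid(relevant))]
    if nonrelevant:
        q = [qi - gamma * ci for qi, ci in zip(q, centroid(nonrelevant))]
    return q

# Toy three-term vocabulary: the user liked one article, disliked another.
updated = rocchio([1.0, 0.0, 0.0],
                  relevant=[[0.5, 0.5, 0.0]],
                  nonrelevant=[[0.0, 0.0, 1.0]])
print(updated)  # the query drifts toward the liked article's terms
```

Iterating this update as the user keeps rating articles is one plausible reading of "as the user interacts with the system, until it reaches the optimal performance".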
  7. Li, K.W.; Yang, C.C.: Automatic crosslingual thesaurus generated from the Hong Kong SAR Police Department Web Corpus for Crime Analysis (2005) 0.00
    0.0035694435 = product of:
      0.014277774 = sum of:
        0.014277774 = weight(_text_:information in 3391) [ClassicSimilarity], result of:
          0.014277774 = score(doc=3391,freq=18.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274568 = fieldWeight in 3391, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3391)
      0.25 = coord(1/4)
    
    Abstract
    For the sake of national security, very large volumes of data and information are generated and gathered daily. Much of this data and information is written in different languages, stored in different locations, and may be seemingly unconnected. Crosslingual semantic interoperability is a major challenge to generating an overview of this disparate data and information so that it can be analyzed, shared, searched, and summarized. The recent terrorist attacks and the tragic events of September 11, 2001 have prompted increased attention on national security and criminal analysis. Many Asian countries and cities, such as Japan, Taiwan, and Singapore, have been advised that they may become the next targets of terrorist attacks. Semantic interoperability has been a focus in digital library research. Traditional information retrieval (IR) approaches normally require a document to share some common keywords with the query. Generating the associations for the related terms between the two term spaces of users and documents is an important issue. The problem can be viewed as the creation of a thesaurus. Apart from this, terrorists and criminals may communicate through letters, e-mails, and faxes in languages other than English. The translation ambiguity significantly exacerbates the retrieval problem. The problem is expanded to crosslingual semantic interoperability. In this paper, we focus on the English/Chinese crosslingual semantic interoperability problem. However, the developed techniques are not limited to the English and Chinese languages but can be applied to many other languages. English and Chinese are popular languages in the Asian region. Much information about national security or crime is communicated in these languages. An efficient automatically generated thesaurus between these languages is important for crosslingual information retrieval between English and Chinese.
    To facilitate crosslingual information retrieval, a corpus-based approach uses the term co-occurrence statistics in parallel or comparable corpora to construct a statistical translation model to cross the language boundary. In this paper, the text-based approach to align English/Chinese Hong Kong Police press release documents from the Web is first presented. We also introduce an algorithmic approach to generate a robust knowledge base based on statistical correlation analysis of the semantics (knowledge) embedded in the bilingual press release corpus. The research output consists of a thesaurus-like, semantic-network knowledge base, which can aid in semantics-based crosslingual information management and retrieval.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.272-281
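The corpus-based approach sketched in the abstract above rests on term co-occurrence statistics across aligned bilingual documents. A minimal illustration - the toy document pairs and the Dice-style association weight are assumptions for illustration, not the paper's actual statistical model:

```python
from collections import Counter

# Toy aligned English/Chinese document pairs (sets of terms per side).
aligned_docs = [
    ({"police", "arrest"}, {"警察", "拘捕"}),
    ({"police", "report"}, {"警察", "報告"}),
    ({"arrest", "suspect"}, {"拘捕", "疑犯"}),
]

pair_freq, en_freq, zh_freq = Counter(), Counter(), Counter()
for en_terms, zh_terms in aligned_docs:
    en_freq.update(en_terms)
    zh_freq.update(zh_terms)
    pair_freq.update((e, z) for e in en_terms for z in zh_terms)

def association(en, zh):
    """Dice-style co-occurrence weight; high values suggest the pair are
    crosslingual translations or close associations."""
    return 2 * pair_freq[(en, zh)] / (en_freq[en] + zh_freq[zh])

print(association("police", "警察"))   # co-occur in every shared document
print(association("police", "疑犯"))   # never co-occur
```

Thresholding such weights over a large aligned corpus yields a thesaurus-like network of crosslingual term associations.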
  8. Li, K.W.; Yang, C.C.: Conceptual analysis of parallel corpus collected from the Web (2006) 0.00
    0.0033256328 = product of:
      0.013302531 = sum of:
        0.013302531 = weight(_text_:information in 5051) [ClassicSimilarity], result of:
          0.013302531 = score(doc=5051,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21684799 = fieldWeight in 5051, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5051)
      0.25 = coord(1/4)
    
    Abstract
    As illustrated by the World Wide Web, the volume of information in languages other than English has grown significantly in recent years. This highlights the importance of multilingual corpora. Much effort has been devoted to the compilation of multilingual corpora for the purpose of cross-lingual information retrieval and machine translation. Existing parallel corpora mostly involve European languages, such as English-French and English-Spanish. There is still a lack of parallel corpora between European languages and Asian languages. In the authors' previous work, an alignment method to identify one-to-one Chinese and English title pairs was developed to construct an English-Chinese parallel corpus automatically from the World Wide Web, and 100% precision and 87% recall were obtained. Careful analysis of these results has helped the authors to understand how the alignment method can be improved. A conceptual analysis was conducted, which includes the analysis of conceptual equivalence and conceptual information alternation in the aligned and nonaligned English-Chinese title pairs obtained by the alignment method. The result of the analysis not only reflects the characteristics of parallel corpora, but also gives insight into the strengths and weaknesses of the alignment method. In particular, conceptual alternation, such as omission and addition, is found to have a significant impact on the performance of the alignment method.
    Footnote
    Contribution to a special topic section on multilingual information systems
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.632-644
  9. Shi, X.; Yang, C.C.: Mining related queries from Web search engine query logs using an improved association rule mining model (2007) 0.00
    0.0033256328 = product of:
      0.013302531 = sum of:
        0.013302531 = weight(_text_:information in 597) [ClassicSimilarity], result of:
          0.013302531 = score(doc=597,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21684799 = fieldWeight in 597, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=597)
      0.25 = coord(1/4)
    
    Abstract
    With the overwhelming volume of information, the task of finding relevant information on a given topic on the Web is becoming increasingly difficult. Web search engines have hence become one of the most popular solutions available on the Web. However, it has never been easy for novice users to organize and represent their information needs using simple queries. Users have to keep modifying their input queries until they get the expected results. Therefore, it is often desirable for search engines to give suggestions on related queries to users. Besides, by identifying those related queries, search engines can potentially perform optimizations on their systems, such as query expansion and file indexing. In this work we propose a method that suggests a list of related queries given an initial input query. The related queries are based on the query log of previously submitted queries by human users, and can be identified using an enhanced model of association rules. Users can utilize the suggested related queries to tune or redirect the search process. Our method not only discovers the related queries, but also ranks them according to the degree of their relatedness. Unlike many other rival techniques, it also performs reasonably well on less frequent input queries.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1871-1883
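Session-level association rules of the kind the abstract above describes can be illustrated with plain support/confidence counting. The session format and the threshold below are assumptions, and the paper's "enhanced" association-rule model is not reproduced:

```python
from collections import Counter
from itertools import combinations

# Toy query log: one list of queries per user session (assumed format).
sessions = [
    ["jaguar", "jaguar car", "jaguar price"],
    ["jaguar", "jaguar animal"],
    ["jaguar", "jaguar car"],
]

pair_count, query_count = Counter(), Counter()
for session in sessions:
    unique = set(session)
    query_count.update(unique)
    pair_count.update(frozenset(p) for p in combinations(sorted(unique), 2))

def related(query, min_conf=0.5):
    """Suggest queries co-occurring with `query` in the same sessions,
    ranked by confidence P(other | query)."""
    suggestions = []
    for pair, n in pair_count.items():
        if query in pair:
            other = next(q for q in pair if q != query)
            confidence = n / query_count[query]
            if confidence >= min_conf:
                suggestions.append((other, confidence))
    return sorted(suggestions, key=lambda t: -t[1])

print(related("jaguar"))
```

The confidence ordering is one simple stand-in for the paper's "degree of relatedness" ranking.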
  10. Yang, C.C.; Li, K.W.: A heuristic method based on a statistical approach for Chinese text segmentation (2005) 0.00
    0.0029745363 = product of:
      0.011898145 = sum of:
        0.011898145 = weight(_text_:information in 4580) [ClassicSimilarity], result of:
          0.011898145 = score(doc=4580,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.19395474 = fieldWeight in 4580, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4580)
      0.25 = coord(1/4)
    
    Abstract
    The authors propose a heuristic method for Chinese automatic text segmentation based on a statistical approach. This method is developed based on statistical information about the association among adjacent characters in Chinese text. Mutual information of bi-grams and significant estimation of tri-grams are utilized. A heuristic method with six rules is then proposed to determine the segmentation points in a Chinese sentence. No dictionary is required in this method. Chinese text segmentation is important in Chinese text indexing and thus greatly affects the performance of Chinese information retrieval. Due to the lack of delimiters of words in Chinese text, Chinese text segmentation is more difficult than English text segmentation. Besides, segmentation ambiguities and occurrences of out-of-vocabulary words (i.e., unknown words) are the major challenges in Chinese segmentation. Many research studies dealing with the problem of word segmentation have focused on the resolution of segmentation ambiguities. The problem of unknown word identification has not drawn much attention. The experimental result shows that the proposed heuristic method is promising for segmenting the unknown words as well as the known words. The authors further investigated the distribution of the errors of commission and the errors of omission caused by the proposed heuristic method, and benchmarked it against a previously proposed technique, boundary detection. It is found that the heuristic method outperformed the boundary detection method.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.13, S.1438-1447
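The bigram mutual-information signal at the heart of the method above can be sketched as follows. The single-threshold segmentation rule below stands in for the paper's six heuristic rules and is purely illustrative:

```python
import math
from collections import Counter

def bigram_mi(text):
    """Pointwise mutual information of every adjacent character pair:
    MI(ab) = log2(P(ab) / (P(a) * P(b))). A high value suggests the two
    characters belong to one word; a low value suggests a boundary."""
    chars = Counter(text)
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    n, m = len(text), len(text) - 1
    return {bg: math.log2((c / m) / ((chars[bg[0]] / n) * (chars[bg[1]] / n)))
            for bg, c in bigrams.items()}

def segment(text, mi, threshold):
    """Insert a boundary marker wherever the adjacent pair's MI drops
    below the threshold - one illustrative rule, not the paper's six."""
    out = [text[0]]
    for i in range(1, len(text)):
        if mi.get(text[i - 1:i + 1], 0.0) < threshold:
            out.append("|")
        out.append(text[i])
    return "".join(out)

# Latin characters stand in for Chinese characters in this toy example;
# in practice the statistics come from a large Chinese corpus.
mi = bigram_mi("abababcd")
print(segment("abababcd", mi, threshold=1.2))
```

Because the statistics are gathered from raw text, no dictionary is needed, which is why the approach can handle unknown words.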
  11. Wang, F.L.; Yang, C.C.: The impact analysis of language differences on an automatic multilingual text summarization system (2006) 0.00
    0.0029745363 = product of:
      0.011898145 = sum of:
        0.011898145 = weight(_text_:information in 5049) [ClassicSimilarity], result of:
          0.011898145 = score(doc=5049,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.19395474 = fieldWeight in 5049, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.25 = coord(1/4)
    
    Abstract
    Based on the salient features of the documents, automatic text summarization systems extract the key sentences from source documents. This process supports users in evaluating the relevance of the extracted documents returned by information retrieval systems. With this tool, efficient filtering can be achieved. Indirectly, these systems help to resolve the problem of information overload. Many automatic text summarization systems have been implemented for use with different languages. It has been established that the grammatical and lexical differences between languages have a significant effect on text processing. However, the impact of language differences on automatic text summarization systems has not yet been investigated. The authors provide an impact analysis of language differences on automatic text summarization. It includes the effects on the extraction processes, the scoring mechanisms, the performance, and the matching of the extracted sentences, using a parallel corpus in English and Chinese as the test object. The analysis results provide a greater understanding of language differences and promote the future development of more advanced text summarization techniques.
    Footnote
    Contribution to a special topic section on multilingual information systems
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.684-696
  12. Yang, C.C.; Wang, F.L.: Hierarchical summarization of large documents (2008) 0.00
    0.0029745363 = product of:
      0.011898145 = sum of:
        0.011898145 = weight(_text_:information in 1719) [ClassicSimilarity], result of:
          0.011898145 = score(doc=1719,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.19395474 = fieldWeight in 1719, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1719)
      0.25 = coord(1/4)
    
    Abstract
    Many automatic text summarization models have been developed in the last decades. Related research in information science has shown that human abstractors extract sentences for summaries based on the hierarchical structure of documents; however, the existing automatic summarization models do not take into account the human abstractor's behavior of sentence extraction, and only consider the document as a sequence of sentences during the process of extracting sentences for a summary. In general, a document exhibits a well-defined hierarchical structure that can be described as fractals - mathematical objects with a high degree of redundancy. In this article, we introduce the fractal summarization model based on fractal theory. The important information is captured from the source document by exploring the hierarchical structure and salient features of the document. A condensed version of the document that is informatively close to the source document is produced iteratively using the contractive transformation in fractal theory. The fractal summarization model is the first attempt to apply fractal theory to document summarization. It significantly improves the information coverage and the precision of the summary. User evaluations have been conducted. The results indicate that fractal summarization is promising and outperforms current summarization techniques that do not consider the hierarchical structure of documents.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.6, S.887-902
  13. Wang, F.L.; Yang, C.C.: Mining Web data for Chinese segmentation (2007) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 604) [ClassicSimilarity], result of:
          0.010304097 = score(doc=604,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 604, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=604)
      0.25 = coord(1/4)
    
    Abstract
    Modern information retrieval systems use keywords within documents as indexing terms for the search of relevant documents. As Chinese is an ideographic, character-based language, the words in the texts are not delimited by white spaces. Indexing of Chinese documents is impossible without a proper segmentation algorithm. Many Chinese segmentation algorithms have been proposed in the past. Traditional segmentation algorithms cannot operate without a large dictionary or a large corpus of training data. Nowadays, the Web has become the largest corpus, which is ideal for Chinese segmentation. Although most search engines have problems in segmenting texts into proper words, they maintain huge databases of documents and frequencies of character sequences in the documents. Their databases are important potential resources for segmentation. In this paper, we propose a segmentation algorithm that mines Web data with the help of search engines. In addition, the Romanized pinyin of the Chinese language indicates boundaries of words in the text. Our algorithm is the first to utilize Romanized pinyin for segmentation. It is the first unified segmentation algorithm for the Chinese language from different geographical areas, and it is also domain-independent because of the nature of the Web. Experiments have been conducted on the datasets of a recent Chinese segmentation competition. The results show that our algorithm outperforms the traditional algorithms in terms of precision and recall. Moreover, our algorithm can effectively deal with the problems of segmentation ambiguity, new word (unknown word) detection, and stop words.
    Footnote
    Contribution to a special topic section "Mining Web resources for enhancing information retrieval"
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.12, S.1820-1837
  14. Yang, C.C.; Li, K.W.: Automatic construction of English/Chinese parallel corpora (2003) 0.00
    0.002379629 = product of:
      0.009518516 = sum of:
        0.009518516 = weight(_text_:information in 1683) [ClassicSimilarity], result of:
          0.009518516 = score(doc=1683,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1551638 = fieldWeight in 1683, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1683)
      0.25 = coord(1/4)
    
    Abstract
    As the demand for global information increases significantly, multilingual corpora have become a valuable linguistic resource for applications in cross-lingual information retrieval and natural language processing. Dictionaries are the most typical tools for crossing the boundaries that exist between different languages. However, general-purpose dictionaries are less sensitive to both genre and domain, and it is impractical to manually construct tailored bilingual dictionaries or sophisticated multilingual thesauri for large applications. Corpus-based approaches, which do not share the limitations of dictionaries, provide a statistical translation model with which to cross the language boundary. Many domain-specific parallel or comparable corpora are employed in machine translation and cross-lingual information retrieval. Most of these are corpora between Indo-European languages, such as English/French and English/Spanish; Asian/Indo-European corpora, especially English/Chinese corpora, are relatively sparse. The objective of the present research is to construct an English/Chinese parallel corpus automatically from the World Wide Web. In this paper, an alignment method based on dynamic programming is presented to identify one-to-one Chinese and English title pairs. The method includes alignment at the title, word, and character levels. The longest common subsequence (LCS) is applied to find the most reliable Chinese translation of an English word. As one word in a language may translate into two or more words repetitively in another language, the edit operation of deletion is used to resolve redundancy. A score function is then proposed to determine the optimal title pairs. Experiments have been conducted to investigate the performance of the proposed method using the daily press release articles published by the Hong Kong SAR government as the test bed. The precision of the result is 0.998 while the recall is 0.806. The release articles and speech articles published by Hongkong & Shanghai Banking Corporation Limited were also used to test our method; the precision is 1.00 and the recall is 0.948.
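    The character-level alignment step rests on the longest common subsequence. As a minimal illustrative sketch (not the authors' implementation), the LCS length can be computed by dynamic programming and normalized into a similarity score for a candidate title pair:

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings,
    computed by the standard O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_score(a: str, b: str) -> float:
    """Normalized LCS similarity in [0, 1]; 1.0 means one string is a
    subsequence of the other. The normalization is an assumption made
    for illustration, not the paper's exact score function."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / min(len(a), len(b))
```

    A higher score suggests a more reliable match between an English title and a candidate Chinese translation.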
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.8, S.730-742
  15. Chau, M.; Lu, Y.; Fang, X.; Yang, C.C.: Characteristics of character usage in Chinese Web searching (2009) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 2456) [ClassicSimilarity], result of:
          0.008413259 = score(doc=2456,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 2456, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2456)
      0.25 = coord(1/4)
    
    Abstract
    The use of non-English Web search engines has become prevalent. Given the popularity of Chinese Web searching and the unique characteristics of the Chinese language, it is imperative to conduct studies with a focus on the analysis of Chinese Web search queries. In this paper, we report our research on the character usage of Chinese search logs from a Web search engine in Hong Kong. By examining the distribution of search query terms, we found that users tended to use more diversified terms and that the usage of characters in search queries was quite different from the character usage of general online information in Chinese. After studying the Zipf distribution of n-grams with different values of n, we found that the unigram curve is the most curved of all, while the bigram curve follows the Zipf distribution best, and that the curves of n-grams with larger n (n = 3-6) had similar structures, with α-values in the range of 0.66-0.86. The distribution of combined n-grams was also studied. All the analyses were performed on the data both before and after the removal of function terms and incomplete terms, and similar findings were revealed. We believe the findings from this study have provided some insights into further research in non-English Web searching and will assist in the design of more effective Chinese Web search engines.
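    The n-gram analysis described above can be sketched in a few lines: count character n-grams, then estimate the Zipf exponent (α) as the slope of log frequency versus log rank. This is a generic illustration under assumed definitions, not the study's exact procedure:

```python
from collections import Counter
import math

def ngram_counts(text: str, n: int) -> Counter:
    """Frequency of character n-grams, the unit used for Chinese queries."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def zipf_exponent(counts: Counter) -> float:
    """Least-squares slope of log(frequency) vs. log(rank); its magnitude
    approximates the Zipf alpha-value for the distribution."""
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

    For a distribution whose frequencies fall off exactly as 1/rank, the estimated exponent is close to 1.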
    Source
    Information processing and management. 45(2009) no.1, S.115-130
  16. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 2738) [ClassicSimilarity], result of:
          0.008413259 = score(doc=2738,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 2738, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2738)
      0.25 = coord(1/4)
    
    Abstract
    Navigating through hyperlinks within a Web site to look for information from one of its Web pages without the support of a site map can be inefficient and ineffective. Although the content of a Web site is usually organized with an inherent structure like a topic hierarchy, which is a directed tree rooted at a Web site's homepage whose vertices and edges correspond to Web pages and hyperlinks, such a topic hierarchy is not always available to the user. In this work, we studied the problem of automatic generation of Web sites' topic hierarchies. We modeled a Web site's link structure as a weighted directed graph and proposed methods for estimating edge weights based on eight types of features and three learning algorithms, namely decision trees, naïve Bayes classifiers, and logistic regression. Three graph algorithms, namely breadth-first search, shortest-path search, and directed minimum-spanning tree, were adapted to generate the topic hierarchy based on the graph model. We have tested the model and algorithms on real Web sites. It is found that the directed minimum-spanning tree algorithm with the decision tree as the weight learning algorithm achieves the highest performance with an average accuracy of 91.9%.
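    As an illustrative sketch of one of the three graph algorithms named above, a shortest-path tree rooted at the homepage can serve as a topic hierarchy. The graph encoding and edge costs here are assumptions for demonstration, not the paper's learned weights:

```python
import heapq

def shortest_path_hierarchy(graph, home):
    """Build a topic hierarchy as the shortest-path tree rooted at the
    homepage. `graph` maps each page to {linked_page: edge_cost}, where a
    lower cost stands in for a higher learned likelihood that the
    hyperlink is a parent-child link. Returns page -> parent page."""
    dist = {home: 0.0}
    parent = {home: None}
    heap = [(0.0, home)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent
```

    In the paper's experiments the directed minimum-spanning-tree variant (with decision-tree weights) performed best; the shortest-path version is shown here only because it is the simplest to sketch.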
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.495-508
  17. Yang, C.C.; Lin, J.; Wei, C.-P.: Retaining knowledge for document management : category-tree integration by exploiting category relationships and hierarchical structures (2010) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 3581) [ClassicSimilarity], result of:
          0.008413259 = score(doc=3581,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 3581, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3581)
      0.25 = coord(1/4)
    
    Abstract
    The category-tree document-classification structure is widely used by enterprises and information providers to organize, archive, and access documents for effective knowledge management. However, category trees from various sources use different hierarchical structures, which usually make mappings between categories in different category trees difficult. In this work, we propose a category-tree integration technique. We develop a method to learn the relationships between any two categories and develop operations such as mapping, splitting, and insertion for this integration. According to the parent-child relationship of the integrating categories, the developed decision rules use integration operations to integrate categories from the source category tree with those from the master category tree. A unified category tree can accumulate knowledge from multiple resources without forfeiting the knowledge in individual category trees. Experiments have been conducted to measure the performance of the integration operations and the accuracy of the integrated category trees. The proposed category-tree integration technique achieves greater than 80% integration accuracy, and the insert operation is the most frequently utilized, followed by map and split. The insert operation achieves an F1 of 77%, while the map and split operations achieve F1 scores of 86% and 29%, respectively.
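    A hypothetical decision rule in the spirit of the map/split/insert operations above: choose an operation for a source category by how its documents overlap the master tree's categories. The thresholds and overlap measure are illustrative assumptions, not the paper's learned rules:

```python
def choose_operation(source_docs, master_categories, theta=0.6):
    """Pick an integration operation for one source category.
    `source_docs` is a set of document ids; `master_categories` maps a
    master category name to its set of document ids.
    - map: one master category covers most of the source's documents
    - split: the source's documents scatter over several master categories
    - insert: nothing in the master tree matches, so add a new category
    Thresholds (theta, the 0.2 scatter cutoff) are illustrative."""
    shares = {name: len(source_docs & docs) / len(source_docs)
              for name, docs in master_categories.items()}
    best = max(shares, key=shares.get) if shares else None
    if best is not None and shares[best] >= theta:
        return ("map", best)
    scattered = [name for name, s in shares.items() if s > 0.2]
    if len(scattered) >= 2:
        return ("split", scattered)
    return ("insert", None)
```

    A full integration pass would apply this rule top-down over the source tree, using the parent-child structure to constrain where mapped and inserted categories land.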
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.7, S.1313-1331
  18. Tang, X.; Yang, C.C.; Song, M.: Understanding the evolution of multiple scientific research domains using a content and network approach (2013) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 744) [ClassicSimilarity], result of:
          0.008413259 = score(doc=744,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 744, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=744)
      0.25 = coord(1/4)
    
    Abstract
    Interdisciplinary research has been attracting more attention in recent decades. In this article, we compare the similarity between scientific research domains and quantify their temporal similarities. We narrowed our study to three research domains: information retrieval (IR), database (DB), and World Wide Web (W3), because the rapid development of the W3 domain substantially attracted research efforts from both the IR and DB domains and introduced new research questions to these two areas. Most existing approaches either employed a content-based technique or a cocitation or coauthorship network-based technique to study the development trend of a research area. In this work, we proposed an effective way to quantify the similarities among different research domains by incorporating content similarity and coauthorship network similarity. Experimental results on DBLP (DataBase systems and Logic Programming) data related to the IR, DB, and W3 domains showed that the W3 domain was getting closer to both IR and DB whereas the distance between IR and DB remained relatively constant. In addition, compared with the IR and W3 domains, the DB domain was more conservative and evolved relatively slowly.
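    The combination of content similarity and coauthorship-network similarity can be sketched as a weighted sum of two set-overlap scores. The Jaccard measure and the mixing weight `alpha` are assumptions chosen for illustration, not the paper's exact formulation:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two sets (e.g. vocabulary terms or coauthor names)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def domain_similarity(terms_x, terms_y, authors_x, authors_y, alpha=0.5):
    """Hypothetical combined similarity between two research domains:
    a weighted sum of content similarity (shared vocabulary) and
    coauthorship-network similarity (shared authors). `alpha` balances
    the two components and is an illustrative assumption."""
    content = jaccard(set(terms_x), set(terms_y))
    network = jaccard(set(authors_x), set(authors_y))
    return alpha * content + (1 - alpha) * network
```

    Computing this score per time window for each domain pair yields the temporal similarity curves the abstract describes.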
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.5, S.1065-1075
  19. Zhang, M.; Yang, C.C.: Using content and network analysis to understand the social support exchange patterns and user behaviors of an online smoking cessation intervention program (2015) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 1668) [ClassicSimilarity], result of:
          0.008413259 = score(doc=1668,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 1668, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1668)
      0.25 = coord(1/4)
    
    Abstract
    Informational support and nurturant support are two basic types of social support offered in online health communities. This study identifies types of social support in the QuitStop forum and brings insights into the exchange patterns of social support and user behaviors with content analysis and social network analysis. Motivated by user information behavior, this study defines two patterns to describe social support exchange: initiated support exchange and invited support exchange. It is found that users with a longer quitting time tend to actively give initiated support, and recent quitters with a shorter abstinent time are likely to seek and receive invited support. This study also finds that givers of informational support quit longer ago than givers of nurturant support, and that receivers of informational support quit more recently than receivers of nurturant support. Usually, informational support is offered by users at late quit stages to users at early quit stages, while nurturant support is also exchanged among users within the same quit stage. These findings help us understand how health consumers support each other and reveal new capabilities of online intervention programs that can be designed to offer social support in a timely and effective manner.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.3, S.564-575
  20. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.00
    0.001803217 = product of:
      0.007212868 = sum of:
        0.007212868 = weight(_text_:information in 1616) [ClassicSimilarity], result of:
          0.007212868 = score(doc=1616,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.11757882 = fieldWeight in 1616, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
      0.25 = coord(1/4)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest at-home Internet population globally in 2002 (the U.S. Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats, and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research in crossing language boundaries, especially between European and Oriental languages, is still in its initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and the Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool to retrieve relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
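    A minimal sketch of Hopfield-style spreading activation for thesaurus expansion, assuming pre-computed co-occurrence weights between terms (the transfer function, clamping of seed terms, and all parameters are illustrative assumptions, not the paper's configuration):

```python
import math

def hopfield_expand(seeds, weights, iterations=20, theta=0.5):
    """Suggest related (possibly cross-lingual) terms by spreading
    activation from seed terms over a co-occurrence network.
    `weights[(u, v)]` is a co-occurrence weight in [0, 1] from term u to
    term v. Seed terms are clamped to full activation; after a fixed
    number of iterations, non-seed terms whose activation exceeds
    `theta` are returned, strongest first."""
    terms = {t for pair in weights for t in pair} | set(seeds)
    act = {t: (1.0 if t in seeds else 0.0) for t in terms}

    def transfer(x):
        # Sigmoid transfer function centered at 0.5; the steepness (6.0)
        # is an arbitrary illustrative choice.
        return 1.0 / (1.0 + math.exp(-6.0 * (x - 0.5)))

    for _ in range(iterations):
        new = {}
        for t in terms:
            if t in seeds:
                new[t] = 1.0  # clamp query terms
            else:
                net = sum(weights.get((u, t), 0.0) * act[u] for u in terms)
                new[t] = transfer(net)
        act = new
    return sorted((t for t in terms if t not in seeds and act[t] >= theta),
                  key=lambda t: -act[t])
```

    Because activation propagates transitively, a term linked to the seed only through an intermediate term can still surface, which is how the network recovers relevant vocabulary a dictionary lookup would miss.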
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.671-682