Search (61 results, page 1 of 4)

  • author_ss:"Chen, H."
  1. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: ¬A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.07
    0.067009345 = sum of:
      0.021531772 = product of:
        0.08612709 = sum of:
          0.08612709 = weight(_text_:authors in 5276) [ClassicSimilarity], result of:
            0.08612709 = score(doc=5276,freq=4.0), product of:
              0.24182312 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053045183 = queryNorm
              0.35615736 = fieldWeight in 5276, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5276)
        0.25 = coord(1/4)
      0.045477577 = sum of:
        0.0095431255 = weight(_text_:a in 5276) [ClassicSimilarity], result of:
          0.0095431255 = score(doc=5276,freq=12.0), product of:
            0.06116359 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.053045183 = queryNorm
            0.15602624 = fieldWeight in 5276, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
        0.035934452 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
          0.035934452 = score(doc=5276,freq=2.0), product of:
            0.1857552 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.053045183 = queryNorm
            0.19345059 = fieldWeight in 5276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
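    As a sanity check, the explanation tree above can be reproduced by hand. A minimal Python sketch follows, using only the numbers printed in the tree (ClassicSimilarity defines idf(t) = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(termFreq)):
    ```python
    from math import log, sqrt

    # Recompute the _text_:authors contribution from the explanation tree above.
    query_norm = 0.053045183
    idf = 1.0 + log(44218 / (1258 + 1))          # -> 4.558814
    query_weight = idf * query_norm              # -> 0.24182312
    field_weight = sqrt(4.0) * idf * 0.0390625   # tf * idf * fieldNorm -> 0.35615736
    weight = query_weight * field_weight         # -> 0.08612709
    print(weight * 0.25)                         # coord(1/4) -> 0.021531772
    ```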
    
    Abstract
    With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70 to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context.
    Date
    22. 7.2006 16:14:37
    Type
    a
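    To make the classification pipeline in the abstract concrete, here is a minimal sketch; the toy messages and authors are hypothetical, character n-grams stand in for the paper's four feature types, and only the SVM variant is shown:
    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical toy corpus: online messages with known authors.
    messages = ["Selling cheap watches, contact me asap!!",
                "Please find the meeting notes attached.",
                "cheap watches, best prices, contact now!!",
                "Attached are the revised meeting notes."]
    authors = ["A", "B", "A", "B"]

    # Character n-grams as a crude stand-in for lexical/structural style features.
    clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                        LinearSVC())
    clf.fit(messages, authors)
    print(clf.predict(["contact me now for cheap watches!!"]))  # -> ['A']
    ```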
  2. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.03
    0.027745321 = product of:
      0.055490643 = sum of:
        0.055490643 = sum of:
          0.012369305 = weight(_text_:a in 2733) [ClassicSimilarity], result of:
            0.012369305 = score(doc=2733,freq=14.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.20223314 = fieldWeight in 2733, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2733)
          0.043121338 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
            0.043121338 = score(doc=2733,freq=2.0), product of:
              0.1857552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053045183 = queryNorm
              0.23214069 = fieldWeight in 2733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2733)
      0.5 = coord(1/2)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
    Type
    a
  3. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.02
    0.023121104 = product of:
      0.046242207 = sum of:
        0.046242207 = sum of:
          0.010307753 = weight(_text_:a in 7469) [ClassicSimilarity], result of:
            0.010307753 = score(doc=7469,freq=14.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.1685276 = fieldWeight in 7469, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=7469)
          0.035934452 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
            0.035934452 = score(doc=7469,freq=2.0), product of:
              0.1857552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053045183 = queryNorm
              0.19345059 = fieldWeight in 7469, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=7469)
      0.5 = coord(1/2)
    
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing, it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are examined, to find out: first, what is browsing and what cognitive processes are associated with it; second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject of the Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed. Three browsing strategies were identified: (1) search-oriented browse, scanning and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing interesting information in the presence of transient browse goals that represent changing tasks; and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently than did novices. Based on these findings, suggestions are made to hypertext developers.
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
    Type
    a
  4. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.02
    0.02294547 = sum of:
      0.018270312 = product of:
        0.07308125 = sum of:
          0.07308125 = weight(_text_:authors in 4242) [ClassicSimilarity], result of:
            0.07308125 = score(doc=4242,freq=2.0), product of:
              0.24182312 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053045183 = queryNorm
              0.30220953 = fieldWeight in 4242, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=4242)
        0.25 = coord(1/4)
      0.0046751574 = product of:
        0.009350315 = sum of:
          0.009350315 = weight(_text_:a in 4242) [ClassicSimilarity], result of:
            0.009350315 = score(doc=4242,freq=8.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.15287387 = fieldWeight in 4242, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=4242)
        0.5 = coord(1/2)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, information about user access patterns in Web server logs can be used for information personalization or improving Web page design.
    Type
    a
  5. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.02
    0.022323046 = product of:
      0.04464609 = sum of:
        0.04464609 = sum of:
          0.008711642 = weight(_text_:a in 5259) [ClassicSimilarity], result of:
            0.008711642 = score(doc=5259,freq=10.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.14243183 = fieldWeight in 5259, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5259)
          0.035934452 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
            0.035934452 = score(doc=5259,freq=2.0), product of:
              0.1857552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053045183 = queryNorm
              0.19345059 = fieldWeight in 5259, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5259)
      0.5 = coord(1/2)
    
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up-to-date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence-based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System are also integrated in Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
    Type
    a
  6. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009) 0.02
    0.022323046 = product of:
      0.04464609 = sum of:
        0.04464609 = sum of:
          0.008711642 = weight(_text_:a in 2753) [ClassicSimilarity], result of:
            0.008711642 = score(doc=2753,freq=10.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.14243183 = fieldWeight in 2753, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2753)
          0.035934452 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
            0.035934452 = score(doc=2753,freq=2.0), product of:
              0.1857552 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.053045183 = queryNorm
              0.19345059 = fieldWeight in 2753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2753)
      0.5 = coord(1/2)
    
    Abstract
    Social networks evolve over time with the addition and removal of nodes and links to survive and thrive in their environments. Previous studies have shown that the link-formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the important facilitators. In a research partnership with law enforcement agencies, we used dynamic social-network analysis methods to examine several plausible facilitators of co-offending relationships in a large-scale narcotics network consisting of individuals and vehicles. Multivariate Cox regression and a two-proportion z-test on cyclic and focal closures of the network showed that mutual acquaintance and vehicle affiliations were significant facilitators for the network under study. We also found that homophily with respect to age, race, and gender was not a good predictor of future link formation in these networks. Moreover, we examined the social causes and policy implications of the significance or insignificance of various facilitators, including common jails, for future co-offending. These findings provide important insights into the link-formation processes and the resilience of social networks. In addition, they can be used to aid in the prediction of future links. The methods described can also help in understanding the driving forces behind the formation and evolution of social networks facilitated by mobile and Web technologies.
    Date
    22. 3.2009 18:50:30
    Type
    a
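    The two-proportion z-test mentioned in the abstract is standard; a minimal sketch with hypothetical closure counts (not the study's data):
    ```python
    from math import sqrt

    def two_proportion_z(x1, n1, x2, n2):
        """Pooled two-proportion z-test, e.g. for comparing closure rates."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)
        return (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))

    # Hypothetical counts: closures among pairs with vs. without mutual acquaintances.
    z = two_proportion_z(120, 400, 60, 400)
    print(round(z, 2))  # -> 5.08; |z| > 1.96 is significant at the 5% level
    ```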
  7. Qin, J.; Zhou, Y.; Chau, M.; Chen, H.: Multilingual Web retrieval : an experiment in English-Chinese business intelligence (2006) 0.02
    0.020379137 = sum of:
      0.0152252605 = product of:
        0.060901042 = sum of:
          0.060901042 = weight(_text_:authors in 5054) [ClassicSimilarity], result of:
            0.060901042 = score(doc=5054,freq=2.0), product of:
              0.24182312 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053045183 = queryNorm
              0.25184128 = fieldWeight in 5054, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5054)
        0.25 = coord(1/4)
      0.0051538767 = product of:
        0.010307753 = sum of:
          0.010307753 = weight(_text_:a in 5054) [ClassicSimilarity], result of:
            0.010307753 = score(doc=5054,freq=14.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.1685276 = fieldWeight in 5054, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5054)
        0.5 = coord(1/2)
    
    Abstract
    As increasing numbers of non-English resources have become available on the Web, the interesting and important issue of how Web users can retrieve documents in different languages has arisen. Cross-language information retrieval (CLIR), the study of retrieving information in one language by queries expressed in another language, is a promising approach to the problem. Cross-language information retrieval has attracted much attention in recent years. Most research systems have achieved satisfactory performance on standard Text REtrieval Conference (TREC) collections such as news articles, but CLIR techniques have not been widely studied and evaluated for applications such as Web portals. In this article, the authors present their research in developing and evaluating a multilingual English-Chinese Web portal that incorporates various CLIR techniques for use in the business domain. A dictionary-based approach was adopted that combines phrasal translation, co-occurrence analysis, and pre- and posttranslation query expansion. The portal was evaluated by domain experts, using a set of queries in both English and Chinese. The experimental results showed that co-occurrence-based phrasal translation achieved a 74.6% improvement in precision over simple word-by-word translation. When used together, pre- and posttranslation query expansion improved the performance slightly, achieving a 78.0% improvement over the baseline word-by-word translation approach. In general, applying CLIR techniques in Web applications shows promise.
    Type
    a
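    A minimal sketch of the dictionary-based, co-occurrence-disambiguated translation step described in the abstract; the bilingual dictionary and co-occurrence counts below are hypothetical:
    ```python
    from itertools import product

    # Hypothetical bilingual dictionary: candidate Chinese translations per English term.
    DICT = {"business": ["商业", "生意"], "intelligence": ["情报", "智能"]}
    # Hypothetical co-occurrence counts from a Chinese corpus.
    COOC = {("商业", "情报"): 95, ("商业", "智能"): 12,
            ("生意", "情报"): 7, ("生意", "智能"): 3}

    def translate(query_terms):
        """Choose the translation combination with the strongest mutual co-occurrence."""
        candidates = [DICT[t] for t in query_terms]
        def score(combo):
            return sum(COOC.get((a, b), COOC.get((b, a), 0))
                       for i, a in enumerate(combo) for b in combo[i + 1:])
        return max(product(*candidates), key=score)

    print(translate(["business", "intelligence"]))  # -> ('商业', '情报')
    ```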
  8. Vishwanath, A.; Chen, H.: Technology clusters : using multidimensional scaling to evaluate and structure technology clusters (2006) 0.02
    0.019581081 = sum of:
      0.0152252605 = product of:
        0.060901042 = sum of:
          0.060901042 = weight(_text_:authors in 6006) [ClassicSimilarity], result of:
            0.060901042 = score(doc=6006,freq=2.0), product of:
              0.24182312 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.053045183 = queryNorm
              0.25184128 = fieldWeight in 6006, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6006)
        0.25 = coord(1/4)
      0.004355821 = product of:
        0.008711642 = sum of:
          0.008711642 = weight(_text_:a in 6006) [ClassicSimilarity], result of:
            0.008711642 = score(doc=6006,freq=10.0), product of:
              0.06116359 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.053045183 = queryNorm
              0.14243183 = fieldWeight in 6006, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6006)
        0.5 = coord(1/2)
    
    Abstract
    Empirical evidence suggests that the ownership of related products that form a technology cluster is significantly better than the attributes of an innovation at predicting adoption. The treatment of technology clusters, however, has been ad hoc and study specific: researchers often make a priori assumptions about the relationships between technologies and measure ownership using lists of functionally related technology, without any systematic reasoning. Hence, the authors set out to examine empirically the composition of technology clusters and the differences, if any, in clusters of technologies formed by adopters and nonadopters. Using the Galileo system of multidimensional scaling and the associational diffusion framework, the dissimilarities between 30 technology concepts were scored by adopters and nonadopters. Results indicate clear differences in the conceptualization of clusters: adopters tend to relate technologies based on their functional similarity; here, innovations are perceived to be complementary, and hence, adoption of one technology spurs the adoption of related technologies. On the other hand, nonadopters tend to relate technologies using a stricter ascendancy of association, where the adoption of an innovation makes subsequent innovations redundant. The results question the measurement approaches and present an alternative methodology.
    Type
    a
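    The multidimensional scaling step can be sketched with scikit-learn; the labels and the 4x4 dissimilarity matrix below are hypothetical (the study scored all pairs of 30 concepts, separately for adopters and nonadopters):
    ```python
    import numpy as np
    from sklearn.manifold import MDS

    labels = ["PC", "laptop", "pager", "fax"]   # hypothetical technology concepts
    D = np.array([[0, 1, 6, 7],
                  [1, 0, 6, 7],
                  [6, 6, 0, 2],
                  [7, 7, 2, 0]], dtype=float)   # hypothetical dissimilarity scores

    # Embed in 2-D so that inter-point distances approximate the dissimilarities;
    # nearby points are then candidate members of the same technology cluster.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)
    for name, (x, y) in zip(labels, coords):
        print(f"{name:7s} {x:7.2f} {y:7.2f}")
    ```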
  9. Schumaker, R.P.; Chen, H.: Evaluating a news-aware quantitative trader : the effect of momentum and contrarian stock selection strategies (2008) 0.00
    0.0045225085 = product of:
      0.009045017 = sum of:
        0.009045017 = product of:
          0.018090034 = sum of:
            0.018090034 = weight(_text_:a in 1352) [ClassicSimilarity], result of:
              0.018090034 = score(doc=1352,freq=22.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.29576474 = fieldWeight in 1352, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We study the coupling of basic quantitative portfolio selection strategies with a financial news article prediction system, AZFinText. By varying the degrees of portfolio formation time, we found that a hybrid system using both quantitative strategy and a full set of financial news articles performed the best. With a 1-week portfolio formation period, we achieved a 20.79% trading return using a Momentum strategy and a 4.54% return using a Contrarian strategy over a 5-week holding period. We also found that trader overreaction to these events led AZFinText to capitalize on these short-term surges in price.
    Type
    a
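    The momentum and contrarian selection rules reduce to ranking stocks by formation-period return; a sketch with hypothetical prices:
    ```python
    import pandas as pd

    # Hypothetical closing prices at the start and end of the formation period.
    prices = pd.DataFrame({"AAA": [10.0, 11.5], "BBB": [20.0, 19.0],
                           "CCC": [5.0, 5.4], "DDD": [8.0, 7.2],
                           "EEE": [30.0, 33.0]})
    formation_return = prices.iloc[-1] / prices.iloc[0] - 1

    k = 2  # portfolio size
    momentum = formation_return.nlargest(k).index.tolist()     # buy recent winners
    contrarian = formation_return.nsmallest(k).index.tolist()  # buy recent losers
    print("momentum:", momentum)      # -> ['AAA', 'EEE']
    print("contrarian:", contrarian)  # -> ['DDD', 'BBB']
    ```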
  10. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.00
    0.0033058354 = product of:
      0.006611671 = sum of:
        0.006611671 = product of:
          0.013223342 = sum of:
            0.013223342 = weight(_text_:a in 2203) [ClassicSimilarity], result of:
              0.013223342 = score(doc=2203,freq=16.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.2161963 = fieldWeight in 2203, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g., multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
    Type
    a
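    A minimal spreading-activation loop in the spirit of (but far simpler than) the Hopfield-style algorithm described above; the terms, link weights, transfer function, and threshold are all hypothetical:
    ```python
    import numpy as np

    # Hypothetical symmetric link weights among 4 thesaurus terms.
    terms = ["retrieval", "indexing", "thesaurus", "ranking"]
    W = np.array([[0.0, 0.6, 0.3, 0.5],
                  [0.6, 0.0, 0.7, 0.1],
                  [0.3, 0.7, 0.0, 0.1],
                  [0.5, 0.1, 0.1, 0.0]])

    def activate(seed, iters=100, theta=0.6):
        """Spread activation from the seed term until the pattern stabilizes."""
        a = np.zeros(len(terms))
        a[terms.index(seed)] = 1.0
        for _ in range(iters):
            nxt = np.tanh(W @ a)          # sigmoid-style transfer function
            nxt[terms.index(seed)] = 1.0  # keep the query term clamped
            if np.allclose(nxt, a, atol=1e-6):
                break
            a = nxt
        return [t for t, v in zip(terms, a) if v > theta]

    print(activate("indexing"))  # -> ['retrieval', 'indexing', 'thesaurus']
    ```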
  11. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.00
    0.0031167713 = product of:
      0.0062335427 = sum of:
        0.0062335427 = product of:
          0.012467085 = sum of:
            0.012467085 = weight(_text_:a in 3845) [ClassicSimilarity], result of:
              0.012467085 = score(doc=3845,freq=8.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.20383182 = fieldWeight in 3845, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, the broader implications of the research for the design of information retrieval systems are discussed.
    Type
    a
  12. Chen, H.: Knowledge-based document retrieval : framework and design (1992) 0.00
    0.0031167713 = product of:
      0.0062335427 = sum of:
        0.0062335427 = product of:
          0.012467085 = sum of:
            0.012467085 = weight(_text_:a in 5283) [ClassicSimilarity], result of:
              0.012467085 = score(doc=5283,freq=2.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.20383182 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=5283)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  13. Chen, H.: Generating, integrating and activating thesauri for concept-based document retrieval (1993) 0.00
    0.0031167713 = product of:
      0.0062335427 = sum of:
        0.0062335427 = product of:
          0.012467085 = sum of:
            0.012467085 = weight(_text_:a in 7623) [ClassicSimilarity], result of:
              0.012467085 = score(doc=7623,freq=2.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.20383182 = fieldWeight in 7623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=7623)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  14. Chen, H.: Explaining and alleviating information management indeterminism : a knowledge-based framework (1994) 0.00
    0.0031167713 = product of:
      0.0062335427 = sum of:
        0.0062335427 = product of:
          0.012467085 = sum of:
            0.012467085 = weight(_text_:a in 8221) [ClassicSimilarity], result of:
              0.012467085 = score(doc=8221,freq=8.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.20383182 = fieldWeight in 8221, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8221)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Attempts to identify the nature and causes of information management indeterminism in an online research environment and proposes solutions for alleviating this indeterminism. Conducts two empirical studies of information management activities. The first identified the types and nature of information management indeterminism by evaluating archived text. The second focused on four sources of indeterminism: subject area knowledge, classification knowledge, system knowledge, and collaboration knowledge. Proposes a knowledge-based design for alleviating indeterminism, which contains a system-generated thesaurus and an inferencing engine.
    Type
    a
  15. Chen, H.; Lynch, K.J.; Bashu, K.; Ng, T.D.: Generating, integrating, and activating thesauri for concept-based document retrieval (1993) 0.00
    0.0031167713 = product of:
      0.0062335427 = sum of:
        0.0062335427 = product of:
          0.012467085 = sum of:
            0.012467085 = weight(_text_:a in 8549) [ClassicSimilarity], result of:
              0.012467085 = score(doc=8549,freq=2.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.20383182 = fieldWeight in 8549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=8549)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  16. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: ¬A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.00
    0.0030490744 = product of:
      0.006098149 = sum of:
        0.006098149 = product of:
          0.012196298 = sum of:
            0.012196298 = weight(_text_:a in 6928) [ClassicSimilarity], result of:
              0.012196298 = score(doc=6928,freq=10.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.19940455 = fieldWeight in 6928, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6928)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classification of electronic brainstorming output and an evaluation of the results. Describes an electronic meeting system and describes the classification problem that exists in the group problem solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with those of a human expert and a Hopfield neural network. Discusses conclusions and directions for future research
    Type
    a
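    A toy Kohonen SOM training loop illustrating the technique; the data is hypothetical (the actual system mapped term vectors from brainstorming comments onto a larger map):
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    grid_w, grid_h, dim = 4, 4, 5  # 4x4 map, 5-dimensional inputs
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

    def train(weights, data, epochs=50, lr=0.5, sigma=1.5):
        for epoch in range(epochs):
            a = lr * (1 - epoch / epochs)  # decaying learning rate
            for x in data:
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
                h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood function
                weights = weights + a * h[:, None] * (x - weights)
        return weights

    data = rng.random((20, dim))  # hypothetical vectors for 20 brainstorming comments
    weights = train(rng.random((grid_w * grid_h, dim)), data)
    cell = np.argmin(((weights - data[0]) ** 2).sum(axis=1))
    print(coords[cell])  # map cell onto which comment 0 falls
    ```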
  17. Zhu, B.; Chen, H.: Information visualization (2004) 0.00
    0.0029718704 = product of:
      0.0059437407 = sum of:
        0.0059437407 = product of:
          0.011887481 = sum of:
            0.011887481 = weight(_text_:a in 4276) [ClassicSimilarity], result of:
              0.011887481 = score(doc=4276,freq=38.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.19435552 = fieldWeight in 4276, product of:
                  6.164414 = tf(freq=38.0), with freq of:
                    38.0 = termFreq=38.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
    Type
    a
  18. Chen, H.; Ng, T.D.; Martinez, J.; Schatz, B.R.: ¬A concept space approach to addressing the vocabulary problem in scientific information retrieval : an experiment on the Worm Community System (1997) 0.00
    0.0029219734 = product of:
      0.0058439467 = sum of:
        0.0058439467 = product of:
          0.011687893 = sum of:
            0.011687893 = weight(_text_:a in 6492) [ClassicSimilarity], result of:
              0.011687893 = score(doc=6492,freq=18.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.19109234 = fieldWeight in 6492, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6492)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This research presents an algorithmic approach to addressing the vocabulary problem in scientific information retrieval and information sharing, using the molecular biology domain as an example. We first present a literature review of cognitive studies related to the vocabulary problem and vocabulary-based search aids (thesauri) and then discuss techniques for building robust and domain-specific thesauri to assist in cross-domain scientific information retrieval. Using a variation of the automatic thesaurus generation techniques, which we refer to as the concept space approach, we recently conducted an experiment in the molecular biology domain in which we created a C. elegans worm thesaurus of 7,657 worm-specific terms and a Drosophila fly thesaurus of 15,626 terms. About 30% of these terms overlapped, which created vocabulary paths from one subject domain to the other. Based on a cognitive study of term association involving 4 biologists, we found that a large percentage (59.6-85.6%) of the terms suggested by the subjects were identified in the conjoined fly-worm thesaurus. However, we found only a small percentage (8.4-18.1%) of the associations suggested by the subjects in the thesaurus.
    Type
    a
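    The co-occurrence core of a concept-space thesaurus reduces to direction-dependent association weights; a simplified sketch over hypothetical indexed documents follows (the actual technique adds term-frequency and specificity weighting):
    ```python
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical documents, already reduced to index terms.
    docs = [{"gene", "p53", "mutation"},
            {"p53", "apoptosis"},
            {"gene", "mutation"},
            {"p53", "mutation", "apoptosis"}]

    df = defaultdict(int)  # document frequency of each term
    co = defaultdict(int)  # document frequency of each unordered term pair
    for d in docs:
        for t in d:
            df[t] += 1
        for pair in combinations(sorted(d), 2):
            co[pair] += 1

    def assoc(j, k):
        """Asymmetric association: the share of j's documents that also contain k."""
        return co[tuple(sorted((j, k)))] / df[j]

    print(assoc("gene", "p53"))  # -> 0.5   ('p53' is suggested for 'gene')
    print(assoc("p53", "gene"))  # -> 0.33... (weaker in the other direction)
    ```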
  19. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.00
    0.0028629373 = product of:
      0.0057258746 = sum of:
        0.0057258746 = product of:
          0.011451749 = sum of:
            0.011451749 = weight(_text_:a in 5202) [ClassicSimilarity], result of:
              0.011451749 = score(doc=5202,freq=12.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.18723148 = fieldWeight in 5202, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5202)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Type
    a
  20. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.00
    0.0028629373 = product of:
      0.0057258746 = sum of:
        0.0057258746 = product of:
          0.011451749 = sum of:
            0.011451749 = weight(_text_:a in 1611) [ClassicSimilarity], result of:
              0.011451749 = score(doc=1611,freq=12.0), product of:
                0.06116359 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.053045183 = queryNorm
                0.18723148 = fieldWeight in 1611, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, significant studies have focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction. In particular, we modified the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In our experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, our trace-tree kernel outperforms the standard tree kernel, and a composite kernel outperforms individual kernels for relation extraction.
    Type
    a
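    Kernel-based classification with a precomputed Gram matrix, the mechanism behind the tree and trace kernels evaluated here, can be sketched with a deliberately simple word-overlap kernel; the sentences and labels are hypothetical, and the paper's kernels operate on parse structures instead:
    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical candidate sentences and whether each asserts a relation.
    sents = ["p53 activates bax", "bax binds bcl-2",
             "p53 was measured", "samples were stored cold"]
    labels = [1, 1, 0, 0]

    def k(a, b):
        """Word-overlap kernel: size of the intersection of the two word sets."""
        return float(len(set(a.split()) & set(b.split())))

    gram = np.array([[k(a, b) for b in sents] for a in sents])
    clf = SVC(kernel="precomputed").fit(gram, labels)

    test = ["p53 activates mdm2"]
    gram_test = np.array([[k(a, b) for b in sents] for a in test])
    print(clf.predict(gram_test))  # predicted relation label for the new sentence
    ```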

Types

  • a 61
  • el 1