Search (55 results, page 1 of 3)

  • author_ss:"Chen, H."
  1. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.03
    0.027215695 = product of:
      0.04082354 = sum of:
        0.02231296 = weight(_text_:to in 2733) [ClassicSimilarity], result of:
          0.02231296 = score(doc=2733,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.26949292 = fieldWeight in 2733, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2733)
        0.018510582 = product of:
          0.037021164 = sum of:
            0.037021164 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
              0.037021164 = score(doc=2733,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.23214069 = fieldWeight in 2733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2733)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
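    Note: the indented breakdown shown with each hit is Lucene's ClassicSimilarity explain output. Each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(termFreq) x idf x fieldNorm), and coord factors scale the sum by the fraction of query clauses that matched. A minimal sketch that re-derives the 0.027215695 score of the first hit from the factors listed above (all numbers are copied verbatim from the explain tree; only the Python wrapper is new):

      import math

      # Factors copied from the explain tree of hit 1 (doc 2733).
      query_norm = 0.045541126

      def clause_score(freq, idf, field_norm):
          """tf-idf contribution of one query term, ClassicSimilarity style."""
          tf = math.sqrt(freq)                  # tf(freq)
          query_weight = idf * query_norm       # idf * queryNorm
          field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      to_clause = clause_score(freq=10.0, idf=1.818051, field_norm=0.046875)
      num22_clause = clause_score(freq=2.0, idf=3.5018296, field_norm=0.046875)

      # "22" sits inside a nested boolean clause where only 1 of 2
      # sub-clauses matched, hence the coord(1/2) = 0.5 factor.
      inner = num22_clause * 0.5

      # Top level: 2 of 3 query clauses matched, hence coord(2/3).
      score = (to_clause + inner) * (2.0 / 3.0)

      print(to_clause)  # ~0.02231296
      print(inner)      # ~0.018510582
      print(score)      # ~0.027215695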
  2. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.02
    0.024950907 = product of:
      0.03742636 = sum of:
        0.022000873 = weight(_text_:to in 7469) [ClassicSimilarity], result of:
          0.022000873 = score(doc=7469,freq=14.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.2657236 = fieldWeight in 7469, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7469)
        0.015425485 = product of:
          0.03085097 = sum of:
            0.03085097 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
              0.03085097 = score(doc=7469,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.19345059 = fieldWeight in 7469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7469)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing, it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are examined to find out: first, what is browsing and what cognitive processes are associated with it; second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject of the Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed. Three browsing strategies were identified: (1) search-oriented browse, scanning and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing interesting information in the presence of transient browse goals that represent changing tasks; and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently than did novices. Based on these findings, suggestions are made to hypertext developers.
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  3. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.02
    0.024950907 = product of:
      0.03742636 = sum of:
        0.022000873 = weight(_text_:to in 5276) [ClassicSimilarity], result of:
          0.022000873 = score(doc=5276,freq=14.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.2657236 = fieldWeight in 5276, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
        0.015425485 = product of:
          0.03085097 = sum of:
            0.03085097 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.03085097 = score(doc=5276,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70 to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context.
    Date
    22. 7.2006 16:14:37
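    A note on the pipeline described in the preceding abstract: writing-style features are extracted from each message and fed to an inductive learner such as a support vector machine. The toy sketch below is not the authors' implementation; it only illustrates the idea with character n-gram features and a linear SVM (scikit-learn assumed available; the messages and author labels are invented placeholders).

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Placeholder training data: messages with known authors.
      messages = [
          "meeting moved to noon, pls confirm asap",
          "Dear all, kindly find attached the minutes of our meeting.",
          "noon works, cya there",
          "Kindly note that the attached schedule has been revised.",
      ]
      authors = ["author_a", "author_b", "author_a", "author_b"]

      # Character n-grams stand in for the richer lexical/syntactic/
      # structural/content-specific feature set used in the paper.
      model = make_pipeline(
          TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
          LinearSVC(),
      )
      model.fit(messages, authors)

      # Attribute an unseen message to the most likely known author.
      print(model.predict(["pls confirm the revised schedule asap"]))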
  4. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.02
    0.022679746 = product of:
      0.03401962 = sum of:
        0.018594133 = weight(_text_:to in 5259) [ClassicSimilarity], result of:
          0.018594133 = score(doc=5259,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.22457743 = fieldWeight in 5259, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5259)
        0.015425485 = product of:
          0.03085097 = sum of:
            0.03085097 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
              0.03085097 = score(doc=5259,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.19345059 = fieldWeight in 5259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5259)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up-to-date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System are also integrated in Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
  5. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009) 0.02
    0.022679746 = product of:
      0.03401962 = sum of:
        0.018594133 = weight(_text_:to in 2753) [ClassicSimilarity], result of:
          0.018594133 = score(doc=2753,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.22457743 = fieldWeight in 2753, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2753)
        0.015425485 = product of:
          0.03085097 = sum of:
            0.03085097 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
              0.03085097 = score(doc=2753,freq=2.0), product of:
                0.15947726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045541126 = queryNorm
                0.19345059 = fieldWeight in 2753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2753)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Social networks evolve over time with the addition and removal of nodes and links to survive and thrive in their environments. Previous studies have shown that the link-formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the important facilitators. In a research partnership with law enforcement agencies, we used dynamic social-network analysis methods to examine several plausible facilitators of co-offending relationships in a large-scale narcotics network consisting of individuals and vehicles. Multivariate Cox regression and a two-proportion z-test on cyclic and focal closures of the network showed that mutual acquaintance and vehicle affiliations were significant facilitators for the network under study. We also found that homophily with respect to age, race, and gender were not good predictors of future link formation in these networks. Moreover, we examined the social causes and policy implications for the significance and insignificance of various facilitators including common jails on future co-offending. These findings provide important insights into the link-formation processes and the resilience of social networks. In addition, they can be used to aid in the prediction of future links. The methods described can also help in understanding the driving forces behind the formation and evolution of social networks facilitated by mobile and Web technologies.
    Date
    22. 3.2009 18:50:30
  6. Chen, H.: An analysis of image queries in the field of art history (2001) 0.01
    0.010267075 = product of:
      0.030801224 = sum of:
        0.030801224 = weight(_text_:to in 5187) [ClassicSimilarity], result of:
          0.030801224 = score(doc=5187,freq=14.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.37201303 = fieldWeight in 5187, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5187)
      0.33333334 = coord(1/3)
    
    Abstract
    Chen arranged with an Art History instructor to require 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all to 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and the Jorgensen schemes are suggested.
  7. Chung, W.; Chen, H.; Reid, E.: Business stakeholder analyzer : an experiment of classifying stakeholders on the Web (2009) 0.01
    0.009193186 = product of:
      0.027579557 = sum of:
        0.027579557 = weight(_text_:to in 2699) [ClassicSimilarity], result of:
          0.027579557 = score(doc=2699,freq=22.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.33310217 = fieldWeight in 2699, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2699)
      0.33333334 = coord(1/3)
    
    Abstract
    As the Web is used increasingly to share and disseminate information, business analysts and managers are challenged to understand stakeholder relationships. Traditional stakeholder theories and frameworks employ a manual approach to analysis and do not scale up to accommodate the rapid growth of the Web. Unfortunately, existing business intelligence (BI) tools lack analysis capability, and research on BI systems is sparse. This research proposes a framework for designing BI systems to identify and to classify stakeholders on the Web, incorporating human knowledge and machine-learned information from Web pages. Based on the framework, we have developed a prototype called Business Stakeholder Analyzer (BSA) that helps managers and analysts to identify and to classify their stakeholders on the Web. Results from our experiment involving algorithm comparison, feature comparison, and a user study showed that the system achieved better within-class accuracies in widespread stakeholder types such as partner/sponsor/supplier and media/reviewer, and was more efficient than human classification. The student and practitioner subjects in our user study strongly agreed that such a system would save analysts' time and help to identify and classify stakeholders. This research contributes to a better understanding of how to integrate information technology with stakeholder theory, and enriches the knowledge base of BI system design.
  8. Zhu, B.; Chen, H.: Validating a geographical image retrieval system (2000) 0.01
    0.00880035 = product of:
      0.026401049 = sum of:
        0.026401049 = weight(_text_:to in 4769) [ClassicSimilarity], result of:
          0.026401049 = score(doc=4769,freq=14.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.3188683 = fieldWeight in 4769, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=4769)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment that validates the system's performance against that of human subjects, in an effort to address the scarcity of research evaluating the performance of an algorithm against that of human beings. The results of the experiment indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances the texture features of an image are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms.
  9. Fu, T.; Abbasi, A.; Chen, H.: ¬A focused crawler for Dark Web forums (2010) 0.01
    0.0087653585 = product of:
      0.026296074 = sum of:
        0.026296074 = weight(_text_:to in 3471) [ClassicSimilarity], result of:
          0.026296074 = score(doc=3471,freq=20.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.31760043 = fieldWeight in 3471, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3471)
      0.33333334 = coord(1/3)
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
  10. Chen, H.: Intelligence and security informatics : Introduction to the special topic issue (2005) 0.01
    0.008231973 = product of:
      0.02469592 = sum of:
        0.02469592 = weight(_text_:to in 3232) [ClassicSimilarity], result of:
          0.02469592 = score(doc=3232,freq=36.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.29827398 = fieldWeight in 3232, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
      0.33333334 = coord(1/3)
    
    Abstract
    Making the Nation Safer: The Role of Science and Technology in Countering Terrorism. The commitment of the scientific, engineering, and health communities to helping the United States and the world respond to security challenges became evident after September 11, 2001. The U.S. National Research Council's report on "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism" (National Research Council, 2002, p. 1) explains the context of such a new commitment: Terrorism is a serious threat to the security of the United States and indeed the world. The vulnerability of societies to terrorist attacks results in part from the proliferation of chemical, biological, and nuclear weapons of mass destruction, but it also is a consequence of the highly efficient and interconnected systems that we rely on for key services such as transportation, information, energy, and health care. The efficient functioning of these systems reflects great technological achievements of the past century, but interconnectedness within and across systems also means that infrastructures are vulnerable to local disruptions, which could lead to widespread or catastrophic failures. As terrorists seek to exploit these vulnerabilities, it is fitting that we harness the nation's exceptional scientific and technological capabilities to counter terrorist threats. A committee of 24 of the leading scientific, engineering, medical, and policy experts in the United States conducted the study described in the report. Eight panels were separately appointed and asked to provide input to the committee. The panels included: (a) biological sciences, (b) chemical issues, (c) nuclear and radiological issues, (d) information technology, (e) transportation, (f) energy facilities, cities, and fixed infrastructure, (g) behavioral, social, and institutional issues, and (h) systems analysis and systems engineering. The focus of the committee's work was to make the nation safer from emerging terrorist threats that sought to inflict catastrophic damage on the nation's people, its infrastructure, or its economy. The committee considered nine areas, each of which is discussed in a separate chapter in the report: nuclear and radiological materials, human and agricultural health systems, toxic chemicals and explosive materials, information technology, energy systems, transportation systems, cities and fixed infrastructure, the response of people to terrorism, and complex and interdependent systems. The chapter on information technology (IT) is particularly relevant to this special issue. The report recommends that "a strategic long-term research and development agenda should be established to address three primary counterterrorism-related areas in IT: information and network security, the IT needs of emergency responders, and information fusion and management" (National Research Council, 2002, pp. 11-12). The R&D in information and network security should include approaches and architectures for prevention, identification, and containment of cyber-intrusions and recovery from them. The R&D to address the IT needs of emergency responders should include ensuring interoperability, maintaining and expanding communications capability during an emergency, communicating with the public during an emergency, and providing support for decision makers.
The R&D in information fusion and management for the intelligence, law enforcement, and emergency response communities should include data mining, data integration, language technologies, and processing of image and audio data. Much of the research reported in this special issue is related to information fusion and management for homeland security.
  11. Chen, H.: Introduction to the JASIST special topic section on Web retrieval and mining : A machine learning perspective (2003) 0.01
    0.0081475405 = product of:
      0.02444262 = sum of:
        0.02444262 = weight(_text_:to in 1610) [ClassicSimilarity], result of:
          0.02444262 = score(doc=1610,freq=12.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.29521468 = fieldWeight in 1610, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=1610)
      0.33333334 = coord(1/3)
    
    Abstract
    Research in information retrieval (IR) has advanced significantly in the past few decades. Many tasks, such as indexing and text categorization, can be performed automatically with minimal human effort. Machine learning has played an important role in such automation by learning various patterns such as document topics, text structures, and user interests from examples. In recent years, it has become increasingly difficult to search for useful information on the World Wide Web because of its large size and unstructured nature. Useful information and resources are often hidden in the Web. While machine learning has been successfully applied to traditional IR systems, applying these algorithms to the Web poses new challenges due to its large size, link structure, diversity in content and languages, and dynamic nature. On the other hand, such characteristics of the Web also provide interesting patterns and knowledge that are not present in traditional information retrieval systems.
  12. Ku, Y.; Chiu, C.; Zhang, Y.; Chen, H.; Su, H.: Text mining self-disclosing health information for public health service (2014) 0.01
    0.0081475405 = product of:
      0.02444262 = sum of:
        0.02444262 = weight(_text_:to in 1262) [ClassicSimilarity], result of:
          0.02444262 = score(doc=1262,freq=12.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.29521468 = fieldWeight in 1262, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
      0.33333334 = coord(1/3)
    
    Abstract
    Understanding specific patterns or knowledge of self-disclosing health information could support public health surveillance and healthcare. This study aimed to develop an analytical framework to identify self-disclosing health information with unusual messages on web forums by leveraging advanced text-mining techniques. To demonstrate the performance of the proposed analytical framework, we conducted an experimental study on 2 major human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) forums in Taiwan. The experimental results show that the classification accuracy increased significantly (up to 83.83%) when using features selected by the information gain technique. The results also show the importance of adopting domain-specific features in analyzing unusual messages on web forums. This study has practical implications for the prevention and support of HIV/AIDS healthcare. For example, public health agencies can re-allocate resources and deliver services to people who need help via social media sites. In addition, individuals can also join a social media site to get better suggestions and support from each other.
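    The information gain feature selection credited above for the jump in classification accuracy scores each candidate feature by how much observing it reduces the entropy of the class label. A small illustrative computation follows (the counts are invented for illustration, not the study's data):

      import math

      def entropy(counts):
          """Shannon entropy of a class distribution given raw counts."""
          total = sum(counts)
          return -sum(c / total * math.log2(c / total) for c in counts if c)

      def information_gain(n_pos, n_neg, pos_with_f, neg_with_f):
          """IG of a binary feature f for a binary class (e.g. unusual vs. normal post)."""
          n = n_pos + n_neg
          h_class = entropy([n_pos, n_neg])
          with_f = pos_with_f + neg_with_f
          without_f = n - with_f
          h_with = entropy([pos_with_f, neg_with_f])
          h_without = entropy([n_pos - pos_with_f, n_neg - neg_with_f])
          return h_class - (with_f / n) * h_with - (without_f / n) * h_without

      # Toy example: 40 "unusual" and 60 "normal" forum posts; a candidate term
      # occurs in 30 of the unusual posts but only 5 of the normal ones.
      print(information_gain(n_pos=40, n_neg=60, pos_with_f=30, neg_with_f=5))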
  13. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    0.00800004 = product of:
      0.02400012 = sum of:
        0.02400012 = weight(_text_:to in 4276) [ClassicSimilarity], result of:
          0.02400012 = score(doc=4276,freq=34.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.2898702 = fieldWeight in 4276, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.33333334 = coord(1/3)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  14. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 1615) [ClassicSimilarity], result of:
          0.023519924 = score(doc=1615,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 1615, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1615)
      0.33333334 = coord(1/3)
    
    Abstract
    Medical professionals and researchers need information from reputable sources to accomplish their work. Unfortunately, the Web has a large number of documents that are irrelevant to their work, even those documents that purport to be "medically-related." This paper describes an architecture designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, or "concept space," and Kohonen-based Self-Organizing Map (SOM) technologies to provide searchers with fine-grained results. Initial results indicate that these systems provide complementary retrieval functionalities. HelpfulMed not only allows users to search Web pages and other online databases, but also allows them to build searches through the use of an automatic thesaurus and browse a graphical display of medical-related topics. Evaluation results for each of the different components are included. Our spidering algorithm outperformed both breadth-first search and PageRank spiders on a test collection of 100,000 Web pages. The automatically generated thesaurus performed as well as both MeSH and UMLS, systems that require human mediation for currency. Lastly, a variant of the Kohonen SOM was comparable to MeSH terms in perceived cluster precision and significantly better at perceived cluster recall.
  15. Dang, Y.; Zhang, Y.; Chen, H.; Hu, P.J.-H.; Brown, S.A.; Larson, C.: Arizona Literature Mapper : an integrated approach to monitor and analyze global bioterrorism research literature (2009) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 2943) [ClassicSimilarity], result of:
          0.023519924 = score(doc=2943,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 2943, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2943)
      0.33333334 = coord(1/3)
    
    Abstract
    Biomedical research is critical to biodefense, which is drawing increasing attention from governments globally as well as from various research communities. The U.S. government has been closely monitoring and regulating biomedical research activities, particularly those studying or involving bioterrorism agents or diseases. Effective surveillance requires comprehensive understanding of extant biomedical research and timely detection of new developments or emerging trends. The rapid knowledge expansion, technical breakthroughs, and spiraling collaboration networks demand greater support for literature search and sharing, which cannot be effectively supported by conventional literature search mechanisms or systems. In this study, we propose an integrated approach that integrates advanced techniques for content analysis, network analysis, and information visualization. We design and implement Arizona Literature Mapper, a Web-based portal that allows users to gain timely, comprehensive understanding of bioterrorism research, including leading scientists, research groups, institutions as well as insights about current mainstream interests or emerging trends. We conduct two user studies to evaluate Arizona Literature Mapper and include a well-known system for benchmarking purposes. According to our results, Arizona Literature Mapper is significantly more effective for supporting users' search of bioterrorism publications than PubMed. Users consider Arizona Literature Mapper more useful and easier to use than PubMed. Users are also more satisfied with Arizona Literature Mapper and show stronger intentions to use it in the future. Assessments of Arizona Literature Mapper's analysis functions are also positive, as our subjects consider them useful, easy to use, and satisfactory. Our results have important implications that are also discussed in the article.
  16. Huang, C.; Fu, T.; Chen, H.: Text-based video content classification for online video-sharing sites (2010) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 3452) [ClassicSimilarity], result of:
          0.023519924 = score(doc=3452,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 3452, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3452)
      0.33333334 = coord(1/3)
    
    Abstract
    With the emergence of Web 2.0, sharing personal content, communicating ideas, and interacting with other online users in Web 2.0 communities have become daily routines for online users. User-generated data from Web 2.0 sites provide rich personal information (e.g., personal preferences and interests) and can be utilized to obtain insight about cyber communities and their social networks. Many studies have focused on leveraging user-generated information to analyze blogs and forums, but few studies have applied this approach to video-sharing Web sites. In this study, we propose a text-based framework for video content classification of online-video sharing Web sites. Different types of user-generated data (e.g., titles, descriptions, and comments) were used as proxies for online videos, and three types of text features (lexical, syntactic, and content-specific features) were extracted. Three feature-based classification techniques (C4.5, Naïve Bayes, and Support Vector Machine) were used to classify videos. To evaluate the proposed framework, user-generated data from candidate videos, which were identified by searching user-given keywords on YouTube, were first collected. Then, a subset of the collected data was randomly selected and manually tagged by users as our experiment data. The experimental results showed that the proposed approach was able to classify online videos based on users' interests with accuracy rates up to 87.2%, and all three types of text features contributed to discriminating videos. Support Vector Machine outperformed C4.5 and Naïve Bayes techniques in our experiments. In addition, our case study further demonstrated that accurate video-classification results are very useful for identifying implicit cyber communities on video-sharing Web sites.
  17. Benjamin, V.; Chen, H.; Zimbra, D.: Bridging the virtual and real : the relationship between web content, linkage, and geographical proximity of social movements (2014) 0.01
    0.007839975 = product of:
      0.023519924 = sum of:
        0.023519924 = weight(_text_:to in 1527) [ClassicSimilarity], result of:
          0.023519924 = score(doc=1527,freq=16.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28407046 = fieldWeight in 1527, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1527)
      0.33333334 = coord(1/3)
    
    Abstract
    As the Internet becomes ubiquitous, it has advanced to more closely represent aspects of the real world. Due to this trend, researchers in various disciplines have become interested in studying relationships between real-world phenomena and their virtual representations. One such area of emerging research seeks to study relationships between real-world and virtual activism of social movement organization (SMOs). In particular, SMOs holding extreme social perspectives are often studied due to their tendency to have robust virtual presences to circumvent real-world social barriers preventing information dissemination. However, many previous studies have been limited in scope because they utilize manual data-collection and analysis methods. They also often have failed to consider the real-world aspects of groups that partake in virtual activism. We utilize automated data-collection and analysis methods to identify significant relationships between aspects of SMO virtual communities and their respective real-world locations and ideological perspectives. Our results also demonstrate that the interconnectedness of SMO virtual communities is affected specifically by aspects of the real world. These observations provide insight into the behaviors of SMOs within virtual environments, suggesting that the virtual communities of SMOs are strongly affected by aspects of the real world.
  18. Chen, H.: Machine learning for information retrieval : neural networks, symbolic learning, and genetic algorithms (1994) 0.01
    0.0077611795 = product of:
      0.023283537 = sum of:
        0.023283537 = weight(_text_:to in 2657) [ClassicSimilarity], result of:
          0.023283537 = score(doc=2657,freq=8.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.28121543 = fieldWeight in 2657, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2657)
      0.33333334 = coord(1/3)
    
    Abstract
    In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, researchers have turned to newer artificial intelligence based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms grounded on diverse paradigms. These have provided great opportunities to enhance the capabilities of current information storage and retrieval systems. Provides an overview of these techniques and presents 3 popular methods: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms in the context of information retrieval. The techniques are promising in their ability to analyze user queries, identify users' information needs, and suggest alternatives for search, and can greatly complement the prevailing full-text, keyword-based, probabilistic, and knowledge-based techniques.
  19. Chen, H.; Ng, T.: An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.0074376534 = product of:
      0.02231296 = sum of:
        0.02231296 = weight(_text_:to in 2203) [ClassicSimilarity], result of:
          0.02231296 = score(doc=2203,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.26949292 = fieldWeight in 2203, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g., multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies.
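    As a rough illustration of the first algorithm described above, the sketch below runs a best-first (branch-and-bound style) expansion over a tiny weighted concept network, propagating activation from a seed term and keeping only the most strongly activated related concepts. The network, link weights, and threshold are invented for illustration; only the control structure reflects the approach described in the abstract.

      import heapq

      # Toy concept network: term -> [(related term, link weight in 0..1), ...]
      network = {
          "neural networks": [("machine learning", 0.9), ("hopfield net", 0.8)],
          "machine learning": [("information retrieval", 0.7), ("genetic algorithms", 0.6)],
          "hopfield net": [("parallel relaxation", 0.5)],
      }

      def explore(seed, top_k=5, min_activation=0.1):
          """Best-first expansion of the most strongly activated concepts."""
          best = {seed: 1.0}
          heap = [(-1.0, seed)]                  # max-heap via negated activation
          while heap:
              neg_act, term = heapq.heappop(heap)
              act = -neg_act
              for neighbor, weight in network.get(term, []):
                  new_act = act * weight         # activation decays along each link
                  if new_act >= min_activation and new_act > best.get(neighbor, 0.0):
                      best[neighbor] = new_act
                      heapq.heappush(heap, (-new_act, neighbor))
          return sorted(best.items(), key=lambda kv: -kv[1])[:top_k]

      print(explore("neural networks"))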
  20. Chen, H.; Shankaranarayanan, G.; She, L.: A machine learning approach to inductive query by examples : an experiment using relevance feedback, ID3, genetic algorithms, and simulated annealing (1998) 0.01
    0.0074376534 = product of:
      0.02231296 = sum of:
        0.02231296 = weight(_text_:to in 1148) [ClassicSimilarity], result of:
          0.02231296 = score(doc=1148,freq=10.0), product of:
            0.08279609 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.045541126 = queryNorm
            0.26949292 = fieldWeight in 1148, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=1148)
      0.33333334 = coord(1/3)
    
    Abstract
    Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, information science researchers have turned to other newer inductive learning techniques including symbolic learning, genetic algorithms, and simulated annealing. These newer techniques, which are grounded in diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information systems. In this article, we first provide an overview of these newer techniques and their use in information retrieval research. In order to familiarize readers with the techniques, we present 3 promising methods: the symbolic ID3 algorithm, evolution-based genetic algorithms, and simulated annealing. We discuss their knowledge representations and algorithms in the unique context of information retrieval.

Types

  • a 55
  • el 1