Search (21 results, page 1 of 2)

  • author_ss:"Chen, H."
  1. Chen, H.: Intelligence and security informatics : Introduction to the special topic issue (2005) 0.05
    0.048086423 = product of:
      0.16830248 = sum of:
        0.08571867 = weight(_text_:united in 3232) [ClassicSimilarity], result of:
          0.08571867 = score(doc=3232,freq=6.0), product of:
            0.22812355 = queryWeight, product of:
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.04066292 = queryNorm
            0.37575546 = fieldWeight in 3232, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
        0.08258381 = weight(_text_:states in 3232) [ClassicSimilarity], result of:
          0.08258381 = score(doc=3232,freq=6.0), product of:
            0.22391328 = queryWeight, product of:
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.04066292 = queryNorm
            0.3688205 = fieldWeight in 3232, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
      0.2857143 = coord(2/7)
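
    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and the per-term sum is scaled by the coordination factor (here 2 of 7 query terms matched). A minimal sketch in plain Python, reusing only the values reported above for the term "united" in doc 3232, re-derives the listed score:

      import math

      idf = 5.6101127          # 1 + ln(44218 / (439 + 1)), as reported above
      query_norm = 0.04066292  # normalization factor shared by all query terms
      tf = math.sqrt(6.0)      # 2.4494898, since ClassicSimilarity uses sqrt(termFreq)
      field_norm = 0.02734375  # stored length norm for this field

      query_weight = idf * query_norm             # 0.22812355
      field_weight = tf * idf * field_norm        # 0.37575546
      united_score = query_weight * field_weight  # 0.08571867

      # Document score: sum the per-term scores, then apply coord(2/7).
      doc_score = (united_score + 0.08258381) * (2 / 7)
      print(round(doc_score, 9))  # ~0.048086423, matching the header score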
    
    Abstract
    The commitment of the scientific, engineering, and health communities to helping the United States and the world respond to security challenges became evident after September 11, 2001. The U.S. National Research Council's report on "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism" (National Research Council, 2002, p. 1) explains the context of this new commitment: Terrorism is a serious threat to the security of the United States and indeed the world. The vulnerability of societies to terrorist attacks results in part from the proliferation of chemical, biological, and nuclear weapons of mass destruction, but it also is a consequence of the highly efficient and interconnected systems that we rely on for key services such as transportation, information, energy, and health care. The efficient functioning of these systems reflects great technological achievements of the past century, but interconnectedness within and across systems also means that infrastructures are vulnerable to local disruptions, which could lead to widespread or catastrophic failures. As terrorists seek to exploit these vulnerabilities, it is fitting that we harness the nation's exceptional scientific and technological capabilities to counter terrorist threats. A committee of 24 of the leading scientific, engineering, medical, and policy experts in the United States conducted the study described in the report. Eight panels were separately appointed and asked to provide input to the committee. The panels included: (a) biological sciences, (b) chemical issues, (c) nuclear and radiological issues, (d) information technology, (e) transportation, (f) energy facilities, cities, and fixed infrastructure, (g) behavioral, social, and institutional issues, and (h) systems analysis and systems engineering. The focus of the committee's work was to make the nation safer from emerging terrorist threats that sought to inflict catastrophic damage on the nation's people, its infrastructure, or its economy. The committee considered nine areas, each of which is discussed in a separate chapter of the report: nuclear and radiological materials, human and agricultural health systems, toxic chemicals and explosive materials, information technology, energy systems, transportation systems, cities and fixed infrastructure, the response of people to terrorism, and complex and interdependent systems. The chapter on information technology (IT) is particularly relevant to this special issue. The report recommends that "a strategic long-term research and development agenda should be established to address three primary counterterrorism-related areas in IT: information and network security, the IT needs of emergency responders, and information fusion and management" (National Research Council, 2002, pp. 11-12). The R&D in information and network security should include approaches and architectures for prevention, identification, and containment of cyber-intrusions and recovery from them. The R&D to address the IT needs of emergency responders should include ensuring interoperability, maintaining and expanding communications capability during an emergency, communicating with the public during an emergency, and providing support for decision makers.
The R&D in information fusion and management for the intelligence, law enforcement, and emergency response communities should include data mining, data integration, language technologies, and processing of image and audio data. Much of the research reported in this special issue is related to information fusion and management for homeland security.
  2. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.03
    0.025769586 = product of:
      0.09019355 = sum of:
        0.07366575 = weight(_text_:sites in 2733) [ClassicSimilarity], result of:
          0.07366575 = score(doc=2733,freq=2.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.34654665 = fieldWeight in 2733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=2733)
        0.016527792 = product of:
          0.033055585 = sum of:
            0.033055585 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
              0.033055585 = score(doc=2733,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.23214069 = fieldWeight in 2733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2733)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed a strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
  3. Huang, C.; Fu, T.; Chen, H.: Text-based video content classification for online video-sharing sites (2010) 0.02
    0.019609718 = product of:
      0.13726802 = sum of:
        0.13726802 = weight(_text_:sites in 3452) [ClassicSimilarity], result of:
          0.13726802 = score(doc=3452,freq=10.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.6457515 = fieldWeight in 3452, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3452)
      0.14285715 = coord(1/7)
    
    Abstract
    With the emergence of Web 2.0, sharing personal content, communicating ideas, and interacting with other online users in Web 2.0 communities have become daily routines for online users. User-generated data from Web 2.0 sites provide rich personal information (e.g., personal preferences and interests) and can be utilized to obtain insight about cyber communities and their social networks. Many studies have focused on leveraging user-generated information to analyze blogs and forums, but few studies have applied this approach to video-sharing Web sites. In this study, we propose a text-based framework for video content classification of online video-sharing Web sites. Different types of user-generated data (e.g., titles, descriptions, and comments) were used as proxies for online videos, and three types of text features (lexical, syntactic, and content-specific features) were extracted. Three feature-based classification techniques (C4.5, Naïve Bayes, and Support Vector Machine) were used to classify videos. To evaluate the proposed framework, user-generated data from candidate videos, which were identified by searching user-given keywords on YouTube, were first collected. Then, a subset of the collected data was randomly selected and manually tagged by users as our experiment data. The experimental results showed that the proposed approach was able to classify online videos based on users' interests with accuracy rates up to 87.2%, and all three types of text features contributed to discriminating videos. Support Vector Machine outperformed C4.5 and Naïve Bayes techniques in our experiments. In addition, our case study further demonstrated that accurate video-classification results are very useful for identifying implicit cyber communities on video-sharing Web sites.
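
    The classification step described above maps naturally onto off-the-shelf tooling. Below is a minimal sketch, assuming scikit-learn, using only lexical TF-IDF features with a linear SVM (the paper additionally used C4.5 and Naïve Bayes, plus syntactic and content-specific features); the video proxies and labels are illustrative placeholders, not the paper's YouTube data:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Hypothetical user-generated proxies (title + description + comments per video).
      docs = [
          "funny cat jumps off the sofa, so cute",
          "my dog learns a new trick, adorable puppy",
          "acoustic guitar lesson for beginners, chords explained",
          "live jazz piano improvisation, late night session",
      ]
      labels = ["pets", "pets", "music", "music"]

      # Lexical features only; the paper's full feature set is richer.
      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      model.fit(docs, labels)
      print(model.predict(["slow blues guitar solo with band"]))  # likely ['music']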
  4. Chen, H.; Chung, W.; Qin, J.; Reid, E.; Sageman, M.; Weimann, G.: Uncovering the dark Web : a case study of Jihad on the Web (2008) 0.01
    0.01052368 = product of:
      0.07366575 = sum of:
        0.07366575 = weight(_text_:sites in 1880) [ClassicSimilarity], result of:
          0.07366575 = score(doc=1880,freq=2.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.34654665 = fieldWeight in 1880, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=1880)
      0.14285715 = coord(1/7)
    
    Abstract
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the Dark Web - the reverse side of the Web used by terrorists. Currently, the problems of information overload and the difficulty of obtaining a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information from 39 Jihad Web sites and developed visualizations of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, having a high potential to assist in investigation and understanding of terrorist activities by producing results that could potentially help guide both policymaking and intelligence research.
  5. Ku, Y.; Chiu, C.; Zhang, Y.; Chen, H.; Su, H.: Text mining self-disclosing health information for public health service (2014) 0.01
    0.01052368 = product of:
      0.07366575 = sum of:
        0.07366575 = weight(_text_:sites in 1262) [ClassicSimilarity], result of:
          0.07366575 = score(doc=1262,freq=2.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.34654665 = fieldWeight in 1262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
      0.14285715 = coord(1/7)
    
    Abstract
    Understanding specific patterns or knowledge of self-disclosing health information could support public health surveillance and healthcare. This study aimed to develop an analytical framework to identify self-disclosing health information with unusual messages on web forums by leveraging advanced text-mining techniques. To demonstrate the performance of the proposed analytical framework, we conducted an experimental study on 2 major human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) forums in Taiwan. The experimental results show that the classification accuracy increased significantly (up to 83.83%) when using features selected by the information gain technique. The results also show the importance of adopting domain-specific features in analyzing unusual messages on web forums. This study has practical implications for the prevention and support of HIV/AIDS healthcare. For example, public health agencies can re-allocate resources and deliver services to people who need help via social media sites. In addition, individuals can also join a social media site to get better suggestions and support from each other.
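
    The feature-selection step named above (information gain) is straightforward to prototype. Below is a minimal sketch, assuming scikit-learn, where mutual_info_classif stands in as the information-gain score over bag-of-words term features; the forum posts and labels are illustrative placeholders, not the Taiwanese HIV/AIDS forum data:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      posts = [
          "worried about my recent test result, cannot tell my family",
          "where can I get anonymous screening in my city",
          "great weather today, anyone up for basketball",
          "selling my old laptop, message me for details",
      ]
      labels = [1, 1, 0, 0]  # 1 = self-disclosing health message, 0 = other

      model = make_pipeline(
          CountVectorizer(),
          SelectKBest(mutual_info_classif, k=10),  # keep the highest-scoring terms
          MultinomialNB(),
      )
      model.fit(posts, labels)
      print(model.predict(["I need advice after a positive test"]))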
  6. Chen, H.; Beaudoin, C.E.; Hong, H.: Teen online information disclosure : empirical testing of a protection motivation and social capital model (2016) 0.01
    0.01052368 = product of:
      0.07366575 = sum of:
        0.07366575 = weight(_text_:sites in 3203) [ClassicSimilarity], result of:
          0.07366575 = score(doc=3203,freq=2.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.34654665 = fieldWeight in 3203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=3203)
      0.14285715 = coord(1/7)
    
    Abstract
    With bases in protection motivation theory and social capital theory, this study investigates teen and parental factors that determine teens' online privacy concerns, online privacy protection behaviors, and subsequent online information disclosure on social network sites. With secondary data from a 2012 survey (N = 622), the final well-fitting structural equation model revealed that teen online privacy concerns were primarily influenced by parental interpersonal trust and parental concerns about teens' online privacy, whereas teen privacy protection behaviors were primarily predicted by teen cost-benefit appraisal of online interactions. In turn, teen online privacy concerns predicted increased privacy protection behaviors and lower teen information disclosure. Finally, restrictive and instructive parental mediation exerted differential influences on teens' privacy protection behaviors and online information disclosure.
  7. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.01
    0.008471692 = product of:
      0.05930184 = sum of:
        0.05930184 = sum of:
          0.03175552 = weight(_text_:design in 7469) [ClassicSimilarity], result of:
            0.03175552 = score(doc=7469,freq=2.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.20770542 = fieldWeight in 7469, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=7469)
          0.027546322 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
            0.027546322 = score(doc=7469,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 7469, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=7469)
      0.14285715 = coord(1/7)
    
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing, it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are examined, to find out: first, what is browsing and what cognitive processes are associated with it; second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject of the Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed. Three browsing strategies were identified: (1) search-oriented browse, scanning and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing interesting information in the presence of transient browse goals that represent changing tasks; and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently than did novices. Based on these findings, suggestions are made to hypertext developers.
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  8. Chen, H.: Knowledge-based document retrieval : framework and design (1992) 0.01
    0.0072584045 = product of:
      0.05080883 = sum of:
        0.05080883 = product of:
          0.10161766 = sum of:
            0.10161766 = weight(_text_:design in 5283) [ClassicSimilarity], result of:
              0.10161766 = score(doc=5283,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.66465735 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.125 = fieldNorm(doc=5283)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  9. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.01
    0.0062859626 = product of:
      0.044001736 = sum of:
        0.044001736 = product of:
          0.08800347 = sum of:
            0.08800347 = weight(_text_:design in 3845) [ClassicSimilarity], result of:
              0.08800347 = score(doc=3845,freq=6.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.57561016 = fieldWeight in 3845, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Two studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of five computerised models of online document retrieval, which were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, the broader implications of the research for the design of information retrieval systems are discussed.
  10. Chen, H.: Explaining and alleviating information management indeterminism : a knowledge-based framework (1994) 0.00
    0.0036292023 = product of:
      0.025404414 = sum of:
        0.025404414 = product of:
          0.05080883 = sum of:
            0.05080883 = weight(_text_:design in 8221) [ClassicSimilarity], result of:
              0.05080883 = score(doc=8221,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.33232868 = fieldWeight in 8221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8221)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Attempts to identify the nature and causes of information management indeterminism in an online research environment and proposes solutions for alleviating this indeterminism. Conducts two empirical studies of information management activities. The first identified the types and nature of information management indeterminism by evaluating archived text. The second focused on four sources of indeterminism: subject area knowledge, classification knowledge, system knowledge, and collaboration knowledge. Proposes a knowledge-based design for alleviating indeterminism, which contains a system-generated thesaurus and an inferencing engine.
  11. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.00
    0.0027219015 = product of:
      0.01905331 = sum of:
        0.01905331 = product of:
          0.03810662 = sum of:
            0.03810662 = weight(_text_:design in 4242) [ClassicSimilarity], result of:
              0.03810662 = score(doc=4242,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.24924651 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, information about user access patterns from Web server logs can be used for information personalization or for improving Web page design.
  12. Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: A smart itsy bitsy spider for the Web (1998) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 871) [ClassicSimilarity], result of:
              0.03175552 = score(doc=871,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=871)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best-first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages on the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best-first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained a significantly higher recall value than that of the best-first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for the design of a truly interactive and dynamic Web agent.
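
    The best-first variant described above is essentially a priority-queue crawl. Below is a minimal sketch in plain Python, assuming a hypothetical fetch(url) helper that returns a page's text and out-links; the keyword-overlap score is a crude stand-in for the paper's link- and keyword-based relevance measure:

      import heapq

      def relevance(text, keywords):
          # Stand-in score: fraction of the user's keywords found in the page.
          text = text.lower()
          return sum(kw in text for kw in keywords) / len(keywords)

      def best_first_spider(start_urls, keywords, fetch, max_pages=50):
          frontier = [(-1.0, url) for url in start_urls]  # max-heap via negated scores
          heapq.heapify(frontier)
          visited, results = set(), []
          while frontier and len(results) < max_pages:
              _, url = heapq.heappop(frontier)
              if url in visited:
                  continue
              visited.add(url)
              text, links = fetch(url)  # hypothetical: (page text, outgoing links)
              score = relevance(text, keywords)
              results.append((score, url))
              for link in links:
                  if link not in visited:
                      # Expand the most promising neighborhoods first.
                      heapq.heappush(frontier, (-score, link))
          return sorted(results, reverse=True)

    The genetic-algorithm spider replaces this greedy expansion with mutation over candidate homepage sets, which, per the abstract, is what let it surface relevant pages outside the local link neighborhood.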
  13. Marshall, B.; McDonald, D.; Chen, H.; Chung, W.: EBizPort: collecting and analyzing business intelligence information (2004) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 2505) [ClassicSimilarity], result of:
              0.03175552 = score(doc=2505,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 2505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2505)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Footnote
    Beitrag innerhalb der special topic section des Heftes: "Document search interface design"
  14. Chau, M.; Shiu, B.; Chan, M.; Chen, H.: Redips: backlink search and analysis on the Web for business intelligence analysis (2007) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 142) [ClassicSimilarity], result of:
              0.03175552 = score(doc=142,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=142)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    The World Wide Web presents significant opportunities for business intelligence analysis as it can provide information about a company's external environment and its stakeholders. Traditional business intelligence analysis on the Web has focused on simple keyword searching. Recently, it has been suggested that the incoming links, or backlinks, of a company's Web site (i.e., other Web pages that have a hyperlink pointing to the company of interest) can provide important insights about the company's "online communities." Although analysis of these communities can provide useful signals for a company and information about its stakeholder groups, the manual analysis process can be very time-consuming for business analysts and consultants. In this article, we present a tool called Redips that automatically integrates backlink meta-searching and text-mining techniques to facilitate users in performing such business intelligence analysis on the Web. The architectural design and implementation of the tool are presented in the article. To evaluate the effectiveness, efficiency, and user satisfaction of Redips, an experiment was conducted to compare the tool with two popular business intelligence analysis methods: using backlink search engines and manual browsing. The experiment results showed that Redips was statistically more effective than both benchmark methods (in terms of Recall and F-measure) but required more time in search tasks. In terms of user satisfaction, Redips scored statistically higher than backlink search engines in all five measures used, and also statistically higher than manual browsing in three measures.
  15. Chung, W.; Chen, H.; Reid, E.: Business stakeholder analyzer : an experiment of classifying stakeholders on the Web (2009) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 2699) [ClassicSimilarity], result of:
              0.03175552 = score(doc=2699,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 2699, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2699)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    As the Web is used increasingly to share and disseminate information, business analysts and managers are challenged to understand stakeholder relationships. Traditional stakeholder theories and frameworks employ a manual approach to analysis and do not scale up to accommodate the rapid growth of the Web. Unfortunately, existing business intelligence (BI) tools lack analysis capability, and research on BI systems is sparse. This research proposes a framework for designing BI systems to identify and to classify stakeholders on the Web, incorporating human knowledge and machine-learned information from Web pages. Based on the framework, we have developed a prototype called Business Stakeholder Analyzer (BSA) that helps managers and analysts to identify and to classify their stakeholders on the Web. Results from our experiment involving algorithm comparison, feature comparison, and a user study showed that the system achieved better within-class accuracies in widespread stakeholder types such as partner/sponsor/supplier and media/reviewer, and was more efficient than human classification. The student and practitioner subjects in our user study strongly agreed that such a system would save analysts' time and help to identify and classify stakeholders. This research contributes to a better understanding of how to integrate information technology with stakeholder theory, and enriches the knowledge base of BI system design.
  16. Dang, Y.; Zhang, Y.; Chen, H.; Hu, P.J.-H.; Brown, S.A.; Larson, C.: Arizona Literature Mapper : an integrated approach to monitor and analyze global bioterrorism research literature (2009) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 2943) [ClassicSimilarity], result of:
              0.03175552 = score(doc=2943,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 2943, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2943)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Biomedical research is critical to biodefense, which is drawing increasing attention from governments globally as well as from various research communities. The U.S. government has been closely monitoring and regulating biomedical research activities, particularly those studying or involving bioterrorism agents or diseases. Effective surveillance requires comprehensive understanding of extant biomedical research and timely detection of new developments or emerging trends. The rapid knowledge expansion, technical breakthroughs, and spiraling collaboration networks demand greater support for literature search and sharing, which cannot be effectively supported by conventional literature search mechanisms or systems. In this study, we propose an integrated approach that integrates advanced techniques for content analysis, network analysis, and information visualization. We design and implement Arizona Literature Mapper, a Web-based portal that allows users to gain timely, comprehensive understanding of bioterrorism research, including leading scientists, research groups, institutions as well as insights about current mainstream interests or emerging trends. We conduct two user studies to evaluate Arizona Literature Mapper and include a well-known system for benchmarking purposes. According to our results, Arizona Literature Mapper is significantly more effective for supporting users' search of bioterrorism publications than PubMed. Users consider Arizona Literature Mapper more useful and easier to use than PubMed. Users are also more satisfied with Arizona Literature Mapper and show stronger intentions to use it in the future. Assessments of Arizona Literature Mapper's analysis functions are also positive, as our subjects consider them useful, easy to use, and satisfactory. Our results have important implications that are also discussed in the article.
  17. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.00
    0.0022682515 = product of:
      0.01587776 = sum of:
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 3325) [ClassicSimilarity], result of:
              0.03175552 = score(doc=3325,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 3325, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3325)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest developments in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning. However, the effectiveness of these tools in the classroom has not been evaluated. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the two other tools in overall satisfaction and the level of knowledge gained in their learning experience when using the tools for a class project on Internet applications development.
  18. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.00
    0.0019675945 = product of:
      0.013773161 = sum of:
        0.013773161 = product of:
          0.027546322 = sum of:
            0.027546322 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
              0.027546322 = score(doc=5259,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.19345059 = fieldWeight in 5259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5259)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 14:26:01
  19. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.00
    0.0019675945 = product of:
      0.013773161 = sum of:
        0.013773161 = product of:
          0.027546322 = sum of:
            0.027546322 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.027546322 = score(doc=5276,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 16:14:37
  20. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009) 0.00
    0.0019675945 = product of:
      0.013773161 = sum of:
        0.013773161 = product of:
          0.027546322 = sum of:
            0.027546322 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
              0.027546322 = score(doc=2753,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.19345059 = fieldWeight in 2753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2753)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2009 18:50:30