Search (28 results, page 1 of 2)

  • × author_ss:"Chen, H."
  1. Chen, H.: Explaining and alleviating information management indeterminism : a knowledge-based framework (1994) 0.10
    0.09811677 = product of:
      0.14717515 = sum of:
        0.115331195 = weight(_text_:management in 8221) [ClassicSimilarity], result of:
          0.115331195 = score(doc=8221,freq=10.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.6661758 = fieldWeight in 8221, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=8221)
        0.03184395 = product of:
          0.0636879 = sum of:
            0.0636879 = weight(_text_:system in 8221) [ClassicSimilarity], result of:
              0.0636879 = score(doc=8221,freq=4.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.3936941 = fieldWeight in 8221, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8221)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
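Each explain tree in this listing follows the same arithmetic: a term's weight is queryWeight (idf × queryNorm) multiplied by fieldWeight (tf × idf × fieldNorm), and the term weights are summed and scaled by the coord factors. A minimal sketch that reproduces the top score from the constants in the tree above (the explain output itself is float32-rounded, so the last digit can differ):

```python
import math

# Constants copied from the explain tree for doc 8221 above
QUERY_NORM = 0.051362853
FIELD_NORM = 0.0625

def term_weight(freq, idf):
    """ClassicSimilarity term weight = queryWeight * fieldWeight,
    where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm."""
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM  # tf(freq) = sqrt(freq)
    return query_weight * field_weight

w_management = term_weight(10.0, 3.3706124)      # weight(_text_:management)
w_system = term_weight(4.0, 3.1495528) * 0.5     # weight(_text_:system) * coord(1/2)
score = (w_management + w_system) * (2.0 / 3.0)  # outer coord(2/3)
print(round(score, 8))  # close to the 0.09811677 shown above
```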
    
    Abstract
    Attempts to identify the nature and causes of information management indeterminism in an online research environment and proposes solutions for alleviating this indeterminism. Conducts two empirical studies of information management activities. The first identified the types and nature of information management indeterminism by evaluating archived text. The second focused on four sources of indeterminism: subject area knowledge, classification knowledge, system knowledge, and collaboration knowledge. Proposes a knowledge-based design for alleviating indeterminism, which contains a system-generated thesaurus and an inferencing engine.
    Source
    Information processing and management. 30(1994) no.4, S.557-577
  2. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.06
    0.060385596 = product of:
      0.09057839 = sum of:
        0.051577676 = weight(_text_:management in 3845) [ClassicSimilarity], result of:
          0.051577676 = score(doc=3845,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.29792285 = fieldWeight in 3845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=3845)
        0.039000716 = product of:
          0.07800143 = sum of:
            0.07800143 = weight(_text_:system in 3845) [ClassicSimilarity], result of:
              0.07800143 = score(doc=3845,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.48217484 = fieldWeight in 3845, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, discusses the broader implications of the research for the design of information retrieval systems
    Source
    Information processing and management. 27(1991) no.5, S.405-432
  3. Hu, P.J.-H.; Hsu, F.-M.; Hu, H.-f.; Chen, H.: Agency satisfaction with electronic record management systems : a large-scale survey (2010) 0.04
    0.039774552 = product of:
      0.059661828 = sum of:
        0.045588657 = weight(_text_:management in 4115) [ClassicSimilarity], result of:
          0.045588657 = score(doc=4115,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2633291 = fieldWeight in 4115, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4115)
        0.01407317 = product of:
          0.02814634 = sum of:
            0.02814634 = weight(_text_:system in 4115) [ClassicSimilarity], result of:
              0.02814634 = score(doc=4115,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.17398985 = fieldWeight in 4115, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4115)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We investigated agency satisfaction with an electronic record management system (ERMS) that supports the electronic creation, archival, processing, transmittal, and sharing of records (documents) among autonomous government agencies. A factor model, explaining agency satisfaction with ERMS functionalities, offers hypotheses, which we tested empirically with a large-scale survey that involved more than 1,600 government agencies in Taiwan. The data showed a good fit to our model and supported all the hypotheses. Overall, agency satisfaction with ERMS functionalities appears jointly determined by regulatory compliance, job relevance, and satisfaction with support services. Among the determinants we studied, agency satisfaction with support services seems the strongest predictor of agency satisfaction with ERMS functionalities. Regulatory compliance also has important influences on agency satisfaction with ERMS, through its influence on job relevance and satisfaction with support services. Further analyses showed that satisfaction with support services partially mediated the impact of regulatory compliance on satisfaction with ERMS functionalities, and job relevance partially mediated the influence of regulatory compliance on satisfaction with ERMS functionalities. Our findings have important implications for research and practice, which we also discuss.
  4. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.02
    0.020980377 = product of:
      0.06294113 = sum of:
        0.06294113 = sum of:
          0.02814634 = weight(_text_:system in 5259) [ClassicSimilarity], result of:
            0.02814634 = score(doc=5259,freq=2.0), product of:
              0.16177002 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.051362853 = queryNorm
              0.17398985 = fieldWeight in 5259, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5259)
          0.03479479 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
            0.03479479 = score(doc=5259,freq=2.0), product of:
              0.17986396 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051362853 = queryNorm
              0.19345059 = fieldWeight in 5259, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5259)
      0.33333334 = coord(1/3)
    
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up-to-date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System, are also integrated in Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
  5. Chen, H.: Intelligence and security informatics : Introduction to the special topic issue (2005) 0.01
    0.013028046 = product of:
      0.039084136 = sum of:
        0.039084136 = weight(_text_:management in 3232) [ClassicSimilarity], result of:
          0.039084136 = score(doc=3232,freq=6.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22575769 = fieldWeight in 3232, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
      0.33333334 = coord(1/3)
    
    Abstract
    Making the Nation Safer: The Role of Science and Technology in Countering Terrorism. The commitment of the scientific, engineering, and health communities to helping the United States and the world respond to security challenges became evident after September 11, 2001. The U.S. National Research Council's report on "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism" (National Research Council, 2002, p. 1) explains the context of such a new commitment: Terrorism is a serious threat to the security of the United States and indeed the world. The vulnerability of societies to terrorist attacks results in part from the proliferation of chemical, biological, and nuclear weapons of mass destruction, but it also is a consequence of the highly efficient and interconnected systems that we rely on for key services such as transportation, information, energy, and health care. The efficient functioning of these systems reflects great technological achievements of the past century, but interconnectedness within and across systems also means that infrastructures are vulnerable to local disruptions, which could lead to widespread or catastrophic failures. As terrorists seek to exploit these vulnerabilities, it is fitting that we harness the nation's exceptional scientific and technological capabilities to counter terrorist threats. A committee of 24 of the leading scientific, engineering, medical, and policy experts in the United States conducted the study described in the report. Eight panels were separately appointed and asked to provide input to the committee. The panels included: (a) biological sciences, (b) chemical issues, (c) nuclear and radiological issues, (d) information technology, (e) transportation, (f) energy facilities, cities, and fixed infrastructure, (g) behavioral, social, and institutional issues, and (h) systems analysis and systems engineering.
The focus of the committee's work was to make the nation safer from emerging terrorist threats that sought to inflict catastrophic damage on the nation's people, its infrastructure, or its economy. The committee considered nine areas, each of which is discussed in a separate chapter in the report: nuclear and radiological materials, human and agricultural health systems, toxic chemicals and explosive materials, information technology, energy systems, transportation systems, cities and fixed infrastructure, the response of people to terrorism, and complex and interdependent systems. The chapter on information technology (IT) is particularly relevant to this special issue. The report recommends that "a strategic long-term research and development agenda should be established to address three primary counterterrorism-related areas in IT: information and network security, the IT needs of emergency responders, and information fusion and management" (National Research Council, 2002, pp. 11-12). The R&D in information and network security should include approaches and architectures for prevention, identification, and containment of cyber-intrusions and recovery from them. The R&D to address IT needs of emergency responders should include ensuring interoperability, maintaining and expanding communications capability during an emergency, communicating with the public during an emergency, and providing support for decision makers. The R&D in information fusion and management for the intelligence, law enforcement, and emergency response communities should include data mining, data integration, language technologies, and processing of image and audio data. Much of the research reported in this special issue is related to information fusion and management for homeland security.
  6. Hu, P.J.-H.; Lin, C.; Chen, H.: User acceptance of intelligence and security informatics technology : a study of COPLINK (2005) 0.01
    0.01289442 = product of:
      0.038683258 = sum of:
        0.038683258 = weight(_text_:management in 3233) [ClassicSimilarity], result of:
          0.038683258 = score(doc=3233,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 3233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=3233)
      0.33333334 = coord(1/3)
    
    Abstract
    The importance of Intelligence and Security Informatics (ISI) has significantly increased with the rapid and large-scale migration of local/national security information from physical media to electronic platforms, including the Internet and information systems. Motivated by the significance of ISI in law enforcement (particularly in the digital government context) and the limited investigations of officers' technology-acceptance decision-making, we developed and empirically tested a factor model for explaining law-enforcement officers' technology acceptance. Specifically, our empirical examination targeted the COPLINK technology and involved more than 280 police officers. Overall, our model shows a good fit to the data collected and exhibits satisfactory power for explaining law-enforcement officers' technology acceptance decisions. Our findings have several implications for research and technology management practices in law enforcement, which are also discussed.
  7. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.01
    0.01289442 = product of:
      0.038683258 = sum of:
        0.038683258 = weight(_text_:management in 4242) [ClassicSimilarity], result of:
          0.038683258 = score(doc=4242,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 4242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
      0.33333334 = coord(1/3)
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, Web server logs' information about user access patterns can be used for information personalization or improving Web page design.
  8. Zhu, B.; Chen, H.: Validating a geographical image retrieval system (2000) 0.01
    0.012587427 = product of:
      0.03776228 = sum of:
        0.03776228 = product of:
          0.07552456 = sum of:
            0.07552456 = weight(_text_:system in 4769) [ClassicSimilarity], result of:
              0.07552456 = score(doc=4769,freq=10.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.46686378 = fieldWeight in 4769, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4769)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment to validate the system's performance against that of human subjects in an effort to address the scarcity of research evaluating performance of an algorithm against that of human beings. The results of the experiment indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances texture features of an image are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms
  9. Schroeder, J.; Xu, J.; Chen, H.; Chau, M.: Automated criminal link analysis based on domain knowledge (2007) 0.01
    0.009750179 = product of:
      0.029250536 = sum of:
        0.029250536 = product of:
          0.058501072 = sum of:
            0.058501072 = weight(_text_:system in 275) [ClassicSimilarity], result of:
              0.058501072 = score(doc=275,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.36163113 = fieldWeight in 275, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=275)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Link (association) analysis has been used in the criminal justice domain to search large datasets for associations between crime entities in order to facilitate crime investigations. However, link analysis still faces many challenging problems, such as information overload, high search complexity, and heavy reliance on domain knowledge. To address these challenges, this article proposes several techniques for automated, effective, and efficient link analysis. These techniques include the co-occurrence analysis, the shortest path algorithm, and a heuristic approach to identifying associations and determining their importance. We developed a prototype system called CrimeLink Explorer based on the proposed techniques. Results of a user study with 10 crime investigators from the Tucson Police Department showed that our system could help subjects conduct link analysis more efficiently than traditional single-level link analysis tools. Moreover, subjects believed that association paths found based on the heuristic approach were more accurate than those found based solely on the co-occurrence analysis and that the automated link analysis system would be of great help in crime investigations.
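The shortest-path component of the link analysis described above can be sketched with a standard Dijkstra search over an association network in which lower edge weights mean stronger associations. The network, names, and weights below are illustrative stand-ins, not data from CrimeLink Explorer:

```python
import heapq

def shortest_association_path(graph, start, end):
    """Dijkstra over a weighted association network.
    Edge weight = inverse association strength, so the shortest path
    is the strongest chain of associations between two crime entities."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        if node == end:
            break
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if end not in dist:
        return None  # no association path found
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy network: lower weight = stronger association
network = {
    "suspect_a": {"vehicle_x": 0.2, "suspect_b": 0.9},
    "vehicle_x": {"suspect_c": 0.3},
    "suspect_b": {"suspect_c": 0.8},
    "suspect_c": {},
}
print(shortest_association_path(network, "suspect_a", "suspect_c"))
# -> ['suspect_a', 'vehicle_x', 'suspect_c']
```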
  10. Chen, H.; Yim, T.; Fye, D.: Automatic thesaurus generation for an electronic community system (1995) 0.01
    0.009382114 = product of:
      0.02814634 = sum of:
        0.02814634 = product of:
          0.05629268 = sum of:
            0.05629268 = weight(_text_:system in 2918) [ClassicSimilarity], result of:
              0.05629268 = score(doc=2918,freq=8.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.3479797 = fieldWeight in 2918, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2918)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used included term filtering, automatic indexing, and cluster analysis. The testbed for the research was the Worm Community System, which contains a comprehensive library of specialized community data and literature, currently in use by molecular biologists who study the nematode worm. The resulting worm thesaurus included 2709 researchers' names, 798 gene names, 20 experimental methods, and 4302 subject descriptors. On average, each term had about 90 weighted neighbouring terms indicating relevant concepts. The thesaurus was developed as an online search aid. Tests the worm thesaurus in an experiment with 6 worm researchers of varying degrees of expertise and background. The experiment showed that the thesaurus was an excellent 'memory jogging' device and that it supported learning and serendipitous browsing. Despite some occurrences of obvious noise, the system was useful in suggesting relevant concepts for the researchers' queries and it helped improve concept recall. With a simple browsing interface, an automatic thesaurus can become a useful tool for online search and can assist researchers in exploring and traversing a dynamic and complex electronic community system
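The co-occurrence core of such automatic thesaurus generation can be sketched roughly as follows. This is a simplified illustration, not the Worm Community System implementation; the actual system used weighted co-occurrence measures rather than raw counts, and the sample documents are invented:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_thesaurus(documents, top_n=3):
    """Build term -> neighbour lists from document-level co-occurrence counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc_terms in documents:
        # Count each unordered term pair once per document
        for a, b in combinations(sorted(set(doc_terms)), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return {
        term: sorted(nbrs, key=nbrs.get, reverse=True)[:top_n]
        for term, nbrs in counts.items()
    }

docs = [
    ["nematode", "gene", "unc-22"],
    ["nematode", "gene", "mutation"],
    ["gene", "mutation"],
]
thesaurus = cooccurrence_thesaurus(docs)
print(thesaurus["gene"])  # strongest neighbours first
```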
  11. Fu, T.; Abbasi, A.; Chen, H.: ¬A focused crawler for Dark Web forums (2010) 0.01
    0.009382114 = product of:
      0.02814634 = sum of:
        0.02814634 = product of:
          0.05629268 = sum of:
            0.05629268 = weight(_text_:system in 3471) [ClassicSimilarity], result of:
              0.05629268 = score(doc=3471,freq=8.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.3479797 = fieldWeight in 3471, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3471)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
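The URL-ordering idea behind such a focused crawler can be sketched as a best-first crawl loop: candidate links are scored and the most promising are fetched first. The scoring function and in-memory "web" below are illustrative stand-ins, not the authors' Dark Web system:

```python
import heapq

def crawl(seed_urls, fetch, score_url, max_pages=100):
    """Best-first (focused) crawl: higher-scored URLs are fetched first.
    Returns {url: content} for the pages fetched."""
    # heapq is a min-heap, so push negated scores for best-first order
    frontier = [(-score_url(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    seen = set(seed_urls)
    pages = {}
    while frontier and len(pages) < max_pages:
        _, url = heapq.heappop(frontier)
        content, out_links = fetch(url)
        pages[url] = content
        for link in out_links:
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score_url(link), link))
    return pages

# Toy in-memory "web" standing in for forum pages
web = {
    "a": ("page a", ["b", "c"]),
    "b": ("page b", ["c"]),
    "c": ("page c", []),
}
scores = {"a": 1.0, "b": 0.2, "c": 0.9}
fetched = crawl(["a"], fetch=lambda u: web[u], score_url=lambda u: scores[u])
print(sorted(fetched))  # -> ['a', 'b', 'c']
```

An incremental recrawl, as in the article, would re-enqueue already-fetched URLs whose content is expected to have changed; that bookkeeping is omitted here for brevity.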
  12. Schumaker, R.P.; Chen, H.: Evaluating a news-aware quantitative trader : the effect of momentum and contrarian stock selection strategies (2008) 0.01
    0.009287819 = product of:
      0.027863456 = sum of:
        0.027863456 = product of:
          0.055726912 = sum of:
            0.055726912 = weight(_text_:system in 1352) [ClassicSimilarity], result of:
              0.055726912 = score(doc=1352,freq=4.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.34448233 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We study the coupling of basic quantitative portfolio selection strategies with a financial news article prediction system, AZFinText. By varying the degrees of portfolio formation time, we found that a hybrid system using both quantitative strategy and a full set of financial news articles performed the best. With a 1-week portfolio formation period, we achieved a 20.79% trading return using a Momentum strategy and a 4.54% return using a Contrarian strategy over a 5-week holding period. We also found that trader overreaction to these events led AZFinText to capitalize on these short-term surges in price.
  13. Chung, W.; Chen, H.; Reid, E.: Business stakeholder analyzer : an experiment of classifying stakeholders on the Web (2009) 0.01
    0.008125149 = product of:
      0.024375446 = sum of:
        0.024375446 = product of:
          0.048750892 = sum of:
            0.048750892 = weight(_text_:system in 2699) [ClassicSimilarity], result of:
              0.048750892 = score(doc=2699,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.30135927 = fieldWeight in 2699, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2699)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    As the Web is used increasingly to share and disseminate information, business analysts and managers are challenged to understand stakeholder relationships. Traditional stakeholder theories and frameworks employ a manual approach to analysis and do not scale up to accommodate the rapid growth of the Web. Unfortunately, existing business intelligence (BI) tools lack analysis capability, and research on BI systems is sparse. This research proposes a framework for designing BI systems to identify and to classify stakeholders on the Web, incorporating human knowledge and machine-learned information from Web pages. Based on the framework, we have developed a prototype called Business Stakeholder Analyzer (BSA) that helps managers and analysts to identify and to classify their stakeholders on the Web. Results from our experiment involving algorithm comparison, feature comparison, and a user study showed that the system achieved better within-class accuracies in widespread stakeholder types such as partner/sponsor/supplier and media/reviewer, and was more efficient than human classification. The student and practitioner subjects in our user study strongly agreed that such a system would save analysts' time and help to identify and classify stakeholders. This research contributes to a better understanding of how to integrate information technology with stakeholder theory, and enriches the knowledge base of BI system design.
  14. Yang, M.; Kiang, M.; Chen, H.; Li, Y.: Artificial immune system for illicit content identification in social media (2012) 0.01
    0.008125149 = product of:
      0.024375446 = sum of:
        0.024375446 = product of:
          0.048750892 = sum of:
            0.048750892 = weight(_text_:system in 4980) [ClassicSimilarity], result of:
              0.048750892 = score(doc=4980,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.30135927 = fieldWeight in 4980, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4980)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Social media is frequently used as a platform for the exchange of information and opinions as well as propaganda dissemination. But online content can be misused for the distribution of illicit information, such as violent postings in web forums. Illicit content is highly distributed in social media, while non-illicit content is unspecific and topically diverse. It is costly and time-consuming to label a large amount of illicit content (positive examples) and non-illicit content (negative examples) to train classification systems. Nevertheless, it is relatively easy to obtain large volumes of unlabeled content in social media. In this article, an artificial immune system-based technique is presented to address the difficulties in illicit content identification in social media. Inspired by the positive selection principle in the immune system, we designed a novel labeling heuristic based on partially supervised learning to extract high-quality positive and negative examples from unlabeled datasets. The empirical evaluation results from two large hate group web forums suggest that our proposed approach generally outperforms the benchmark techniques and exhibits more stable performance.
  15. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', while in 'concept precision' the three thesauri were comparable. Our analysis also revealed that the terms suggested by the three thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
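The core of such a system-generated thesaurus is term co-occurrence: terms that frequently appear in the same abstracts are offered as related search terms. A toy sketch follows (illustrative only; the study's analysis ran over 400,000+ abstracts on a parallel supercomputer, and all names here are hypothetical):

```python
# Toy sketch of co-occurrence-based term association for a
# system-generated thesaurus (not the paper's implementation).
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(docs):
    """Count, for every term pair, the number of documents containing both."""
    cooc = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        terms = sorted(set(doc.lower().split()))
        for a, b in combinations(terms, 2):
            cooc[a][b] += 1
            cooc[b][a] += 1
    return cooc

def related_terms(cooc, term, k=3):
    """Rank candidate thesaurus terms for `term` by co-occurrence count."""
    return [t for t, _ in sorted(cooc[term].items(),
                                 key=lambda kv: (-kv[1], kv[0]))[:k]]
```

A searcher's query term can then be expanded with its top-ranked associates, increasing 'variety' in search terms as the abstract describes.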
  16. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.01
    Date
    22. 3.2009 17:57:50
  17. Marshall, B.; McDonald, D.; Chen, H.; Chung, W.: EBizPort: collecting and analyzing business intelligence information (2004) 0.01
    Abstract
    To make good decisions, businesses try to gather good intelligence information. Yet managing and processing a large amount of unstructured information and data stand in the way of greater business knowledge. An effective business intelligence tool must be able to access quality information from a variety of sources in a variety of forms, and it must support people as they search for and analyze that information. The EBizPort system was designed to address information needs for the business/IT community. EBizPort's collection-building process is designed to acquire credible, timely, and relevant information. The user interface provides access to collected and metasearched resources using innovative tools for summarization, categorization, and visualization. The effectiveness, efficiency, usability, and information quality of the EBizPort system were measured. EBizPort significantly outperformed Brint, a business search portal, in search effectiveness, information quality, user satisfaction, and usability. Users particularly liked EBizPort's clean and user-friendly interface. Results from our evaluation study suggest that the visualization function added value to the search and analysis process, that the generalizable collection-building technique can be useful for domain-specific information searching on the Web, and that the search interface was important for Web search and browse support.
  18. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: ¬A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.01
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classification of electronic brainstorming output and an evaluation of the results. Describes an electronic meeting system and describes the classification problem that exists in the group problem solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with those of a human expert and a Hopfield neural network. Discusses conclusions and directions for future research
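The SOM classification described above can be sketched minimally. This is illustrative only, not the meeting system's implementation; the 1-D grid, the linear initialisation between the first and last input, and all parameters are simplifying assumptions. Brainstorming comments, reduced to equal-length term vectors, are mapped onto a small grid so that similar comments land on nearby nodes.

```python
# Minimal 1-D Kohonen self-organizing map sketch (illustrative only).
def train_som(vectors, grid=4, epochs=20, lr=0.3):
    """Train a 1-D SOM of `grid` nodes over equal-length input vectors.

    Nodes are initialised along the line between the first and last
    input (a simplification; random initialisation is more common)."""
    dim = len(vectors[0])
    first, last = vectors[0], vectors[-1]
    nodes = [[first[d] + (last[d] - first[d]) * i / (grid - 1)
              for d in range(dim)] for i in range(grid)]
    for epoch in range(epochs):
        # neighbourhood radius shrinks as training proceeds
        radius = max(1, (grid // 2) * (epochs - epoch) // epochs)
        for v in vectors:
            # best-matching unit: node closest to the input vector
            bmu = min(range(grid),
                      key=lambda i: sum((nodes[i][d] - v[d]) ** 2
                                        for d in range(dim)))
            # pull the BMU and its neighbours toward the input
            for i in range(grid):
                if abs(i - bmu) <= radius:
                    for d in range(dim):
                        nodes[i][d] += lr * (v[d] - nodes[i][d])
    return nodes

def assign(nodes, v):
    """Map an input vector to its best-matching node index."""
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - v[d]) ** 2
                                 for d in range(len(v))))
```

After training, comments assigned to the same node form one cluster of the meeting output, which the paper's experiment compared against a human expert and a Hopfield network.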
  19. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.01
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  20. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: ¬A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.01
    Date
    22. 7.2006 16:14:37