Search (27 results, page 1 of 2)

  • author_ss:"Chen, H."
  1. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009) 0.04
    0.042006757 = product of:
      0.12602027 = sum of:
        0.11071308 = weight(_text_:network in 2753) [ClassicSimilarity], result of:
          0.11071308 = score(doc=2753,freq=10.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.5501096 = fieldWeight in 2753, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2753)
        0.015307193 = product of:
          0.030614385 = sum of:
            0.030614385 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
              0.030614385 = score(doc=2753,freq=2.0), product of:
                0.1582543 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191888 = queryNorm
                0.19345059 = fieldWeight in 2753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2753)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Social networks evolve over time with the addition and removal of nodes and links to survive and thrive in their environments. Previous studies have shown that the link-formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the important facilitators. In a research partnership with law enforcement agencies, we used dynamic social-network analysis methods to examine several plausible facilitators of co-offending relationships in a large-scale narcotics network consisting of individuals and vehicles. Multivariate Cox regression and a two-proportion z-test on cyclic and focal closures of the network showed that mutual acquaintance and vehicle affiliations were significant facilitators for the network under study. We also found that homophily with respect to age, race, and gender was not a good predictor of future link formation in these networks. Moreover, we examined the social causes and policy implications of the significance or insignificance of various facilitators, including common jails, for future co-offending. These findings provide important insights into the link-formation processes and the resilience of social networks. In addition, they can be used to aid in the prediction of future links. The methods described can also help in understanding the driving forces behind the formation and evolution of social networks facilitated by mobile and Web technologies.
    Date
    22. 3.2009 18:50:30
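The significance check on closure counts described in the abstract rests on a two-proportion z-test. A minimal sketch of that statistic, using invented counts rather than the study's narcotics-network data:

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z statistic for testing whether two proportions differ."""
    p1, p2 = successes1 / n1, successes2 / n2
    # Pooled proportion under the null hypothesis that p1 == p2.
    p = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: closures among pairs with a mutual acquaintance
# vs. pairs without one (illustrative numbers only).
z = two_proportion_z(120, 400, 60, 400)
```

Values of |z| above 1.96 indicate a difference significant at the 5% level; here z comes out around 5.1, so the hypothetical facilitator would count as significant.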
  2. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.04
    0.041345302 = product of:
      0.1240359 = sum of:
        0.040010586 = weight(_text_:computer in 5704) [ClassicSimilarity], result of:
          0.040010586 = score(doc=5704,freq=2.0), product of:
            0.16515417 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.045191888 = queryNorm
            0.24226204 = fieldWeight in 5704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=5704)
        0.084025316 = weight(_text_:network in 5704) [ClassicSimilarity], result of:
          0.084025316 = score(doc=5704,freq=4.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.41750383 = fieldWeight in 5704, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=5704)
      0.33333334 = coord(2/6)
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study to apply a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that three parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by solving discrepancies between indexer vocabularies and end user vocabularies
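The associative-memory behaviour described above can be miniaturized. Below is a toy Hopfield net (Hebbian training, synchronous updates) recalling a stored pattern from a noisy cue; the patterns are invented and the paper's network is far larger:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+1/-1) patterns; zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, iterations=10):
    """Synchronous updates until the state converges (or max iterations)."""
    for _ in range(iterations):
        new = np.sign(W @ state)
        new[new == 0] = 1          # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two hypothetical orthogonal "concept" patterns; recall the first
# from a copy with one flipped element.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, -1, 1, 1])  # last element flipped
recovered = recall(W, noisy)
```

With orthogonal stored patterns, the single flipped element is corrected in one update step; capacity degrades quickly as more patterns are stored.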
  3. Fu, T.; Abbasi, A.; Chen, H.: ¬A hybrid approach to Web forum interactional coherence analysis (2008) 0.03
    0.033141818 = product of:
      0.09942545 = sum of:
        0.040010586 = weight(_text_:computer in 1872) [ClassicSimilarity], result of:
          0.040010586 = score(doc=1872,freq=2.0), product of:
            0.16515417 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.045191888 = queryNorm
            0.24226204 = fieldWeight in 1872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=1872)
        0.059414867 = weight(_text_:network in 1872) [ClassicSimilarity], result of:
          0.059414867 = score(doc=1872,freq=2.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.29521978 = fieldWeight in 1872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=1872)
      0.33333334 = coord(2/6)
    
    Abstract
    Despite the rapid growth of text-based computer-mediated communication (CMC), its limitations have rendered the medium highly incoherent. This poses problems for content analysis of online discourse archives. Interactional coherence analysis (ICA) attempts to accurately identify and construct CMC interaction networks. In this study, we propose the Hybrid Interactional Coherence (HIC) algorithm for identification of web forum interaction. HIC utilizes a bevy of system and linguistic features, including message header information, quotations, direct address, and lexical relations. Furthermore, several similarity-based methods, including a Lexical Match Algorithm (LMA) and a sliding window method, are utilized to account for interactional idiosyncrasies. Experimental results on two web forums revealed that the proposed HIC algorithm significantly outperformed comparison techniques in terms of precision, recall, and F-measure at both the forum and thread levels. Additionally, an example was used to illustrate how the improved ICA results can facilitate enhanced social network and role analysis capabilities.
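The lexical-match step combined with a sliding window over recent posts can be sketched as follows; the thread and the plain word-overlap measure are simplifications, not the paper's actual LMA:

```python
def lexical_match(reply, candidates, window=3):
    """Link a reply to the prior message (within a sliding window of
    recent posts) sharing the most word types with it."""
    reply_words = set(reply.lower().split())
    best, best_overlap = None, 0
    for idx, msg in list(enumerate(candidates))[-window:]:
        overlap = len(reply_words & set(msg.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = idx, overlap
    return best

# Hypothetical forum thread.
thread = [
    "has anyone tried the new firmware update",
    "my router crashes after the update",
    "what model is your router",
]
parent = lexical_match("the firmware update bricked my router", thread)
```

Here the reply links to message 1, the prior post sharing the most word types within the window; a real system would weight terms and fall back to system features such as headers and quotations.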
  4. Chen, H.: Intelligence and security informatics : Introduction to the special topic issue (2005) 0.02
    0.02419005 = product of:
      0.072570145 = sum of:
        0.023555377 = weight(_text_:services in 3232) [ClassicSimilarity], result of:
          0.023555377 = score(doc=3232,freq=2.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.14197156 = fieldWeight in 3232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
        0.04901477 = weight(_text_:network in 3232) [ClassicSimilarity], result of:
          0.04901477 = score(doc=3232,freq=4.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.24354391 = fieldWeight in 3232, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
      0.33333334 = coord(2/6)
    
    Abstract
    Making the Nation Safer: The Role of Science and Technology in Countering Terrorism The commitment of the scientific, engineering, and health communities to helping the United States and the world respond to security challenges became evident after September 11, 2001. The U.S. National Research Council's report on "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism" (National Research Council, 2002, p. 1) explains the context of such a new commitment: Terrorism is a serious threat to the security of the United States and indeed the world. The vulnerability of societies to terrorist attacks results in part from the proliferation of chemical, biological, and nuclear weapons of mass destruction, but it also is a consequence of the highly efficient and interconnected systems that we rely on for key services such as transportation, information, energy, and health care. The efficient functioning of these systems reflects great technological achievements of the past century, but interconnectedness within and across systems also means that infrastructures are vulnerable to local disruptions, which could lead to widespread or catastrophic failures. As terrorists seek to exploit these vulnerabilities, it is fitting that we harness the nation's exceptional scientific and technological capabilities to counter terrorist threats. A committee of 24 of the leading scientific, engineering, medical, and policy experts in the United States conducted the study described in the report. Eight panels were separately appointed and asked to provide input to the committee. The panels included: (a) biological sciences, (b) chemical issues, (c) nuclear and radiological issues, (d) information technology, (e) transportation, (f) energy facilities, cities, and fixed infrastructure, (g) behavioral, social, and institutional issues, and (h) systems analysis and systems engineering.
    The focus of the committee's work was to make the nation safer from emerging terrorist threats that sought to inflict catastrophic damage on the nation's people, its infrastructure, or its economy. The committee considered nine areas, each of which is discussed in a separate chapter in the report: nuclear and radiological materials, human and agricultural health systems, toxic chemicals and explosive materials, information technology, energy systems, transportation systems, cities and fixed infrastructure, the response of people to terrorism, and complex and interdependent systems. The chapter on information technology (IT) is particularly relevant to this special issue. The report recommends that "a strategic long-term research and development agenda should be established to address three primary counterterrorism-related areas in IT: information and network security, the IT needs of emergency responders, and information fusion and management" (National Research Council, 2002, pp. 11-12). The R&D in information and network security should include approaches and architectures for prevention, identification, and containment of cyber-intrusions and recovery from them. The R&D to address IT needs of emergency responders should include ensuring interoperability, maintaining and expanding communications capability during an emergency, communicating with the public during an emergency, and providing support for decision makers. The R&D in information fusion and management for the intelligence, law enforcement, and emergency response communities should include data mining, data integration, language technologies, and processing of image and audio data. Much of the research reported in this special issue is related to information fusion and management for homeland security.
  5. Ku, Y.; Chiu, C.; Zhang, Y.; Chen, H.; Su, H.: Text mining self-disclosing health information for public health service (2014) 0.02
    0.02011343 = product of:
      0.060340293 = sum of:
        0.040380646 = weight(_text_:services in 1262) [ClassicSimilarity], result of:
          0.040380646 = score(doc=1262,freq=2.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.2433798 = fieldWeight in 1262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
        0.01995965 = product of:
          0.0399193 = sum of:
            0.0399193 = weight(_text_:resources in 1262) [ClassicSimilarity], result of:
              0.0399193 = score(doc=1262,freq=2.0), product of:
                0.16496566 = queryWeight, product of:
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.045191888 = queryNorm
                0.2419855 = fieldWeight in 1262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.650338 = idf(docFreq=3122, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1262)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Understanding specific patterns or knowledge of self-disclosing health information could support public health surveillance and healthcare. This study aimed to develop an analytical framework to identify self-disclosing health information with unusual messages on web forums by leveraging advanced text-mining techniques. To demonstrate the performance of the proposed analytical framework, we conducted an experimental study on 2 major human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) forums in Taiwan. The experimental results show that the classification accuracy increased significantly (up to 83.83%) when using features selected by the information gain technique. The results also show the importance of adopting domain-specific features in analyzing unusual messages on web forums. This study has practical implications for HIV/AIDS prevention and healthcare support. For example, public health agencies can re-allocate resources and deliver services via social media sites to people who need help. In addition, individuals can also join a social media site to get better suggestions and support from each other.
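The information-gain feature selection credited with the accuracy boost reduces to an entropy difference. A toy version over a single binary term feature (labels and data are invented, not the forum corpus):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_present, labels):
    """Entropy reduction from splitting documents on one binary feature."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature_present, labels):
        groups.setdefault(f, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Hypothetical data: does a term's presence predict "unusual" messages?
labels  = ["unusual", "unusual", "normal", "normal"]
present = [True, True, False, False]   # perfectly informative term
gain = information_gain(present, labels)
```

A perfectly class-separating feature recovers the full one bit of label entropy; uninformative features score near zero, so ranking terms by gain keeps only the predictive ones.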
  6. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.02
    0.017151598 = product of:
      0.10290959 = sum of:
        0.10290959 = weight(_text_:network in 2203) [ClassicSimilarity], result of:
          0.10290959 = score(doc=2203,freq=6.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.51133573 = fieldWeight in 2203, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.16666667 = coord(1/6)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge based systems and to alleviate the limitation of the manual browsing approach, develops 2 spreading activation based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
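Both algorithms rest on activation spreading through weighted concept links. A deliberately simplified sketch with a single decay factor and a toy thesaurus fragment (the paper's Hopfield relaxation is more elaborate):

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.1):
    """Spread activation from seed concepts across weighted links,
    keeping nodes whose incoming activation clears a threshold."""
    activation = {node: 1.0 for node in seeds}
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph.get(node, []):
            incoming = activation[node] * weight * decay
            if incoming > threshold and incoming > activation.get(neighbor, 0.0):
                activation[neighbor] = incoming
                frontier.append(neighbor)
    return activation

# Toy thesaurus fragment: concept -> [(related concept, link weight)].
graph = {
    "neural networks": [("hopfield net", 0.9), ("machine learning", 0.8)],
    "hopfield net": [("associative memory", 0.9)],
    "machine learning": [("statistics", 0.4)],
}
activated = spread_activation(graph, ["neural networks"])
```

Strongly linked concepts two hops away ("associative memory") survive the threshold, while weakly linked ones ("statistics") are pruned, which is the basic concept-exploration effect.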
  7. Jiang, S.; Gao, Q.; Chen, H.; Roco, M.C.: ¬The roles of sharing, transfer, and public funding in nanotechnology knowledge-diffusion networks (2015) 0.02
    0.017151598 = product of:
      0.10290959 = sum of:
        0.10290959 = weight(_text_:network in 1823) [ClassicSimilarity], result of:
          0.10290959 = score(doc=1823,freq=6.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.51133573 = fieldWeight in 1823, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=1823)
      0.16666667 = coord(1/6)
    
    Abstract
    Understanding the knowledge-diffusion networks of patent inventors can help governments and businesses effectively use their investment to stimulate commercial science and technology development. Such inventor networks are usually large and complex. This study proposes a multidimensional network analysis framework that utilizes Exponential Random Graph Models (ERGMs) to simultaneously model knowledge-sharing and knowledge-transfer processes, examine their interactions, and evaluate the impacts of network structures and public funding on knowledge-diffusion networks. Experiments are conducted on a longitudinal data set that covers 2 decades (1991-2010) of nanotechnology-related US Patent and Trademark Office (USPTO) patents. The results show that knowledge sharing and knowledge transfer are closely interrelated. Inventors with high degree centrality or boundary-spanning positions play significant roles in the network, and National Science Foundation (NSF) public funding positively affects knowledge sharing despite its small fraction of overall funding and its focus on upstream research topics.
  8. Marshall, B.; Chen, H.; Kaza, S.: Using importance flooding to identify interesting networks of criminal activity (2008) 0.01
    0.014292996 = product of:
      0.08575798 = sum of:
        0.08575798 = weight(_text_:network in 2386) [ClassicSimilarity], result of:
          0.08575798 = score(doc=2386,freq=6.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.42611307 = fieldWeight in 2386, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2386)
      0.16666667 = coord(1/6)
    
    Abstract
    Effectively harnessing available data to support homeland-security-related applications is a major focus in the emerging science of intelligence and security informatics (ISI). Many studies have focused on criminal-network analysis as a major challenge within the ISI domain. Though various methodologies have been proposed, none have been tested for usefulness in creating link charts. This study compares manually created link charts to suggestions made by the proposed importance-flooding algorithm. Mirroring manual investigational processes, our iterative computation employs association-strength metrics, incorporates path-based node importance heuristics, allows for case-specific notions of importance, and adjusts based on the accuracy of previous suggestions. Interesting items are identified by leveraging both node attributes and network structure in a single computation. Our data set was systematically constructed from heterogeneous sources and omits many privacy-sensitive data elements such as case narratives and phone numbers. The flooding algorithm improved on both manual and link-weight-only computations, and our results suggest that the approach is robust across different interpretations of the user-provided heuristics. This study demonstrates an interesting methodology for including user-provided heuristics in network-based analysis, and can help guide the development of ISI-related analysis tools.
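The importance-flooding idea (pushing importance along weighted association links and iterating to a fixed point) might be sketched like this; the network, weights, and decay factor are illustrative, not the study's data or exact computation:

```python
def importance_flooding(edges, seed_scores, decay=0.5, iterations=10):
    """Iteratively push a decayed fraction of each node's importance
    to its neighbors along weighted association links."""
    scores = dict(seed_scores)
    for _ in range(iterations):
        updated = dict(scores)
        for src, dst, weight in edges:
            flowed = scores.get(src, 0.0) * weight * decay
            updated[dst] = max(updated.get(dst, 0.0), flowed)
        if updated == scores:   # fixed point reached
            break
        scores = updated
    return scores

# Hypothetical association network: (source, target, link strength).
edges = [
    ("suspect_a", "suspect_b", 0.8),   # strong tie, e.g. co-arrest
    ("suspect_b", "suspect_c", 0.6),   # shared vehicle
    ("suspect_c", "suspect_d", 0.2),   # weak tie
]
scores = importance_flooding(edges, {"suspect_a": 1.0})
```

Importance decays with path length and link weakness, so nodes close to the seed along strong associations rank highest, mirroring how an analyst would extend a link chart outward from a known suspect.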
  9. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.01
    0.0127760945 = product of:
      0.076656565 = sum of:
        0.076656565 = sum of:
          0.0399193 = weight(_text_:resources in 2733) [ClassicSimilarity], result of:
            0.0399193 = score(doc=2733,freq=2.0), product of:
              0.16496566 = queryWeight, product of:
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.045191888 = queryNorm
              0.2419855 = fieldWeight in 2733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.650338 = idf(docFreq=3122, maxDocs=44218)
                0.046875 = fieldNorm(doc=2733)
          0.036737263 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
            0.036737263 = score(doc=2733,freq=2.0), product of:
              0.1582543 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045191888 = queryNorm
              0.23214069 = fieldWeight in 2733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2733)
      0.16666667 = coord(1/6)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed a strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
  10. Chen, H.: Machine learning for information retrieval : neural networks, symbolic learning, and genetic algorithms (1994) 0.01
    0.011552892 = product of:
      0.06931735 = sum of:
        0.06931735 = weight(_text_:network in 2657) [ClassicSimilarity], result of:
          0.06931735 = score(doc=2657,freq=2.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.3444231 = fieldWeight in 2657, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2657)
      0.16666667 = coord(1/6)
    
    Abstract
    In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, researchers have turned to newer artificial intelligence based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms grounded on diverse paradigms. These have provided great opportunities to enhance the capabilities of current information storage and retrieval systems. Provides an overview of these techniques and presents 3 popular methods in the context of information retrieval: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms. The techniques are promising in their ability to analyze user queries, identify users' information needs, and suggest alternatives for search and can greatly complement the prevailing full text, keyword based, probabilistic, and knowledge based techniques
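Of the three methods surveyed, the genetic-algorithm approach to query refinement is the easiest to miniaturize. Below is a toy GA that evolves a binary mask over candidate query terms; the term list, fitness function, and parameters are all invented, not the paper's setup:

```python
import random

random.seed(42)

def genetic_search(candidates, fitness, generations=30, size=20, p_mut=0.1):
    """Evolve binary masks over candidate query terms toward higher
    retrieval fitness (elitist selection, one-point crossover, mutation)."""
    pop = [[random.randint(0, 1) for _ in candidates] for _ in range(size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: size // 2]          # keep the fitter half
        children = []
        for _ in range(size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(candidates))
            child = a[:cut] + b[cut:]       # one-point crossover
            if random.random() < p_mut:     # occasional bit-flip mutation
                i = random.randrange(len(candidates))
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical candidate terms and relevance judgments.
terms = ["neural", "network", "recipe", "retrieval", "football"]
relevant = {"neural", "network", "retrieval"}

def fitness(mask):
    chosen = {t for t, m in zip(terms, mask) if m}
    return len(chosen & relevant) - len(chosen - relevant)

best = genetic_search(terms, fitness)
```

In a real retrieval setting the fitness would come from relevance feedback on retrieved documents rather than a fixed relevant-term set.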
  11. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: ¬A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.01
    0.011552892 = product of:
      0.06931735 = sum of:
        0.06931735 = weight(_text_:network in 6928) [ClassicSimilarity], result of:
          0.06931735 = score(doc=6928,freq=2.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.3444231 = fieldWeight in 6928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6928)
      0.16666667 = coord(1/6)
    
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classification of electronic brainstorming output and an evaluation of the results. Describes an electronic meeting system and the classification problem that exists in the group problem solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with classifications produced by a human expert and by a Hopfield neural network. Discusses conclusions and directions for future research
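A Kohonen SOM in one dimension can be sketched compactly. This toy version (random initialization, fixed neighborhood of radius 1, decaying learning rate, invented 2-D "comment" vectors) only hints at the paper's setup:

```python
import random

random.seed(0)

def train_som(data, n_units=4, epochs=50, lr=0.5):
    """Train a tiny 1-D self-organizing map: each sample pulls its
    best-matching unit and that unit's line neighbors toward it."""
    dim = len(data[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)     # decaying learning rate
        for x in data:
            # Best-matching unit: smallest squared distance.
            bmu = min(range(n_units),
                      key=lambda u: sum((w - v) ** 2
                                        for w, v in zip(weights[u], x)))
            for u in (bmu - 1, bmu, bmu + 1):  # unit plus 1-D neighbors
                if 0 <= u < n_units:
                    weights[u] = [w + rate * (v - w)
                                  for w, v in zip(weights[u], x)]
    return weights

def classify(weights, x):
    """Assign a vector to its best-matching map unit."""
    return min(range(len(weights)),
               key=lambda u: sum((w - v) ** 2 for w, v in zip(weights[u], x)))

# Two hypothetical clusters of meeting comments as 2-D feature vectors.
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.85, 0.95)]
weights = train_som(data)
```

After training, nearby comment vectors map to the same or adjacent units, which is how the SOM groups brainstorming output into topical regions.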
  12. Hu, P.J.-H.; Hsu, F.-M.; Hu, H.-f.; Chen, H.: Agency satisfaction with electronic record management systems : a large-scale survey (2010) 0.01
    0.011216847 = product of:
      0.06730108 = sum of:
        0.06730108 = weight(_text_:services in 4115) [ClassicSimilarity], result of:
          0.06730108 = score(doc=4115,freq=8.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.405633 = fieldWeight in 4115, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4115)
      0.16666667 = coord(1/6)
    
    Abstract
    We investigated agency satisfaction with an electronic record management system (ERMS) that supports the electronic creation, archival, processing, transmittal, and sharing of records (documents) among autonomous government agencies. A factor model, explaining agency satisfaction with ERMS functionalities, offers hypotheses, which we tested empirically with a large-scale survey that involved more than 1,600 government agencies in Taiwan. The data showed a good fit to our model and supported all the hypotheses. Overall, agency satisfaction with ERMS functionalities appears jointly determined by regulatory compliance, job relevance, and satisfaction with support services. Among the determinants we studied, agency satisfaction with support services seems the strongest predictor of agency satisfaction with ERMS functionalities. Regulatory compliance also has important influences on agency satisfaction with ERMS, through its influence on job relevance and satisfaction with support services. Further analyses showed that satisfaction with support services partially mediated the impact of regulatory compliance on satisfaction with ERMS functionalities, and job relevance partially mediated the influence of regulatory compliance on satisfaction with ERMS functionalities. Our findings have important implications for research and practice, which we also discuss.
  13. Chen, H.; Beaudoin, C.E.; Hong, H.: Teen online information disclosure : empirical testing of a protection motivation and social capital model (2016) 0.01
    0.009902478 = product of:
      0.059414867 = sum of:
        0.059414867 = weight(_text_:network in 3203) [ClassicSimilarity], result of:
          0.059414867 = score(doc=3203,freq=2.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.29521978 = fieldWeight in 3203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.046875 = fieldNorm(doc=3203)
      0.16666667 = coord(1/6)
    
    Abstract
    With bases in protection motivation theory and social capital theory, this study investigates teen and parental factors that determine teens' online privacy concerns, online privacy protection behaviors, and subsequent online information disclosure on social network sites. With secondary data from a 2012 survey (N = 622), the final well-fitting structural equation model revealed that teen online privacy concerns were primarily influenced by parental interpersonal trust and parental concerns about teens' online privacy, whereas teen privacy protection behaviors were primarily predicted by teen cost-benefit appraisal of online interactions. In turn, teen online privacy concerns predicted increased privacy protection behaviors and lower teen information disclosure. Finally, restrictive and instructive parental mediation exerted differential influences on teens' privacy protection behaviors and online information disclosure.
  14. Dang, Y.; Zhang, Y.; Chen, H.; Hu, P.J.-H.; Brown, S.A.; Larson, C.: Arizona Literature Mapper : an integrated approach to monitor and analyze global bioterrorism research literature (2009) 0.01
    0.008252066 = product of:
      0.049512394 = sum of:
        0.049512394 = weight(_text_:network in 2943) [ClassicSimilarity], result of:
          0.049512394 = score(doc=2943,freq=2.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.2460165 = fieldWeight in 2943, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2943)
      0.16666667 = coord(1/6)
    
    Abstract
    Biomedical research is critical to biodefense, which is drawing increasing attention from governments globally as well as from various research communities. The U.S. government has been closely monitoring and regulating biomedical research activities, particularly those studying or involving bioterrorism agents or diseases. Effective surveillance requires comprehensive understanding of extant biomedical research and timely detection of new developments or emerging trends. The rapid knowledge expansion, technical breakthroughs, and spiraling collaboration networks demand greater support for literature search and sharing, which cannot be effectively supported by conventional literature search mechanisms or systems. In this study, we propose an integrated approach that combines advanced techniques for content analysis, network analysis, and information visualization. We design and implement Arizona Literature Mapper, a Web-based portal that allows users to gain timely, comprehensive understanding of bioterrorism research, including leading scientists, research groups, institutions as well as insights about current mainstream interests or emerging trends. We conduct two user studies to evaluate Arizona Literature Mapper and include a well-known system for benchmarking purposes. According to our results, Arizona Literature Mapper is significantly more effective for supporting users' search of bioterrorism publications than PubMed. Users consider Arizona Literature Mapper more useful and easier to use than PubMed. Users are also more satisfied with Arizona Literature Mapper and show stronger intentions to use it in the future. Assessments of Arizona Literature Mapper's analysis functions are also positive, as our subjects consider them useful, easy to use, and satisfactory. Our results have important implications that are also discussed in the article.
  15. Liu, X.; Kaza, S.; Zhang, P.; Chen, H.: Determining inventor status and its effect on knowledge diffusion : a study on nanotechnology literature from China, Russia, and India (2011) 0.01
    Abstract
    In an increasingly global research landscape, it is important to identify the most prolific researchers in various institutions and their influence on the diffusion of knowledge. Knowledge diffusion within institutions is influenced by not just the status of individual researchers but also the collaborative culture that determines status. There are various methods to measure individual status, but few studies have compared them or explored the possible effects of different cultures on the status measures. In this article, we examine knowledge diffusion within science and technology-oriented research organizations. Using social network analysis metrics to measure individual status in large-scale coauthorship networks, we studied an individual's impact on the recombination of knowledge to produce innovation in nanotechnology. Data from the most productive and high-impact institutions in China (Chinese Academy of Sciences), Russia (Russian Academy of Sciences), and India (Indian Institutes of Technology) were used. We found that boundary-spanning individuals influenced knowledge diffusion in all countries. However, our results also indicate that cultural and institutional differences may influence knowledge diffusion.
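The status measures the abstract mentions (social network analysis metrics over large coauthorship networks, with boundary-spanning individuals driving diffusion) can be illustrated with a minimal sketch. This is not the article's implementation; the toy coauthorship data, function names, and the crude connectivity-based boundary-spanner test are invented for illustration:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality, a simple status measure in a coauthorship graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def is_boundary_spanner(edges, node):
    """Crude boundary-spanner test: does removing `node` disconnect its neighbors?"""
    adj = defaultdict(set)
    for a, b in edges:
        if node not in (a, b):
            adj[a].add(b)
            adj[b].add(a)
    nbrs = {v for a, b in edges if node in (a, b) for v in (a, b) if v != node}
    if not nbrs:
        return False
    # Traverse the graph with `node` removed, starting from one of its neighbors
    start = next(iter(nbrs))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return bool(nbrs - seen)  # some neighbor became unreachable

# Hypothetical coauthorship data: author X bridges two otherwise separate clusters.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "X"),
         ("X", "D"), ("D", "E"), ("E", "F"), ("D", "F")]
dc = degree_centrality(edges)
```

A real study at this scale would use library implementations of betweenness centrality or structural-hole measures (e.g., in networkx) rather than this toy cut-vertex check.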
  16. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures.
Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly fast generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  17. Chen, H.; Shankaranarayanan, G.; She, L.: ¬A machine learning approach to inductive query by examples : an experiment using relevance feedback, ID3, genetic algorithms, and simulated annealing (1998) 0.01
    Abstract
    Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, information science researchers have turned to other newer inductive learning techniques including symbolic learning, genetic algorithms, and simulated annealing. These newer techniques, which are grounded in diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information systems. In this article, we first provide an overview of these newer techniques and their use in information retrieval research. To familiarize readers with the techniques, we present 3 promising methods: the symbolic ID3 algorithm, evolution-based genetic algorithms, and simulated annealing. We discuss their knowledge representations and algorithms in the unique context of information retrieval.
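To make the simulated-annealing method concrete, here is a minimal sketch of inductive query by examples: selecting a query-term subset from relevance feedback by annealing over term flips. The relevance data, fitness function, and parameters are all invented for illustration and do not come from the article:

```python
import math
import random

# Invented relevance-feedback data: index terms of documents the user judged
# relevant and non-relevant.
RELEVANT = [{"neural", "retrieval"}, {"neural", "indexing"}, {"retrieval", "indexing"}]
NONRELEVANT = [{"database", "sql"}, {"sql", "storage"}]
TERMS = ["neural", "retrieval", "indexing", "database", "sql", "storage"]

def fitness(query):
    """Reward coverage of relevant documents, penalize hits on non-relevant ones."""
    hits = sum(len(query & doc) for doc in RELEVANT)
    misses = sum(len(query & doc) for doc in NONRELEVANT)
    return hits - 2 * misses

def anneal(steps=2000, temp=2.0, cooling=0.995, seed=0):
    """Simulated annealing over term subsets: toggle one term per step and
    accept downhill moves with probability exp(delta / temp)."""
    rng = random.Random(seed)
    current = set(rng.sample(TERMS, 2))
    best = set(current)
    for _ in range(steps):
        candidate = set(current)
        candidate ^= {rng.choice(TERMS)}  # flip one term in or out
        if candidate:
            delta = fitness(candidate) - fitness(current)
            if delta >= 0 or rng.random() < math.exp(delta / temp):
                current = candidate
                if fitness(current) > fitness(best):
                    best = set(current)
        temp *= cooling  # geometric cooling schedule
    return best

best_query = anneal()
```

On this toy data the annealer converges to the three terms that cover every relevant document and none of the non-relevant ones; the ID3 and genetic-algorithm variants the article discusses optimize comparable objectives with different search strategies.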
  18. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
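The core of a co-occurrence-generated thesaurus can be sketched in a few lines: count how often index terms co-occur in documents, weight each pair by an association measure, and suggest the strongest associates of a query term. This sketch uses a Jaccard-style weight on invented toy data; it is an illustration of the general technique, not the article's parallel implementation:

```python
from collections import Counter
from itertools import combinations

# Invented stand-in for indexed abstracts: the set of index terms per document.
DOCS = [
    {"parallel computing", "supercomputer", "indexing"},
    {"parallel computing", "supercomputer"},
    {"indexing", "thesaurus"},
    {"thesaurus", "co-occurrence"},
    {"indexing", "co-occurrence", "thesaurus"},
]

def cooccurrence_weights(docs):
    """Jaccard-style association: co(a,b) / (df(a) + df(b) - co(a,b))."""
    df = Counter(t for doc in docs for t in doc)        # document frequency per term
    pairs = Counter()                                   # co-occurrence counts
    for doc in docs:
        for a, b in combinations(sorted(doc), 2):
            pairs[(a, b)] += 1
    return {(a, b): co / (df[a] + df[b] - co) for (a, b), co in pairs.items()}

def suggest(term, weights, k=3):
    """Top-k associated terms -- one entry of a system-generated thesaurus."""
    scored = [(w, b if a == term else a)
              for (a, b), w in weights.items() if term in (a, b)]
    return [t for _, t in sorted(scored, reverse=True)[:k]]

W = cooccurrence_weights(DOCS)
```

At the scale reported in the abstract (400,000+ abstracts), the pair counting is the part that was parallelized across processors; the association arithmetic itself is unchanged.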
  19. Chen, H.: Semantic research for digital libraries (1999) 0.01
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
  20. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.01
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest development in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning. However, the effectiveness of these tools in the classroom has not been evaluated. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the two other tools in overall satisfaction and the level of knowledge gained in their learning experience when using the tools for a class project on Internet applications development.