Search (34 results, page 1 of 2)

  • author_ss:"Chen, H."
  1. Chen, H.; Beaudoin, C.E.; Hong, H.: Teen online information disclosure : empirical testing of a protection motivation and social capital model (2016) 0.03
    0.031710386 = product of:
      0.11098635 = sum of:
        0.08978786 = weight(_text_:interactions in 3203) [ClassicSimilarity], result of:
          0.08978786 = score(doc=3203,freq=2.0), product of:
            0.22965278 = queryWeight, product of:
              5.8977947 = idf(docFreq=329, maxDocs=44218)
              0.038938753 = queryNorm
            0.39097226 = fieldWeight in 3203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8977947 = idf(docFreq=329, maxDocs=44218)
              0.046875 = fieldNorm(doc=3203)
        0.021198487 = weight(_text_:with in 3203) [ClassicSimilarity], result of:
          0.021198487 = score(doc=3203,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.22591603 = fieldWeight in 3203, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.046875 = fieldNorm(doc=3203)
      0.2857143 = coord(2/7)
    
    Abstract
    With bases in protection motivation theory and social capital theory, this study investigates teen and parental factors that determine teens' online privacy concerns, online privacy protection behaviors, and subsequent online information disclosure on social network sites. With secondary data from a 2012 survey (N = 622), the final well-fitting structural equation model revealed that teen online privacy concerns were primarily influenced by parental interpersonal trust and parental concerns about teens' online privacy, whereas teen privacy protection behaviors were primarily predicted by teen cost-benefit appraisal of online interactions. In turn, teen online privacy concerns predicted increased privacy protection behaviors and lower teen information disclosure. Finally, restrictive and instructive parental mediation exerted differential influences on teens' privacy protection behaviors and online information disclosure.
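    Scoring note
    The indented trees shown for each result are Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model: each matching query term contributes queryWeight (idf * queryNorm) multiplied by fieldWeight (tf * idf * fieldNorm), the per-term contributions are summed, and a coordination factor scales the sum by the fraction of query clauses that matched. The minimal Python sketch below (an illustrative reconstruction, not code from the retrieval system itself) reproduces the 0.03 score of result 1 from the numbers in its tree:

      from math import sqrt

      def term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity per-term score = queryWeight * fieldWeight
          query_weight = idf * query_norm               # e.g. 5.8977947 * 0.038938753 = 0.22965278
          field_weight = sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      query_norm = 0.038938753
      s_interactions = term_score(2.0, 5.8977947, query_norm, 0.046875)  # ~0.08978786
      s_with = term_score(4.0, 2.409771, query_norm, 0.046875)           # ~0.021198487
      coord = 2 / 7                                     # 2 of 7 query clauses matched
      print((s_interactions + s_with) * coord)          # ~0.031710386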
  2. Jiang, S.; Gao, Q.; Chen, H.; Roco, M.C.: The roles of sharing, transfer, and public funding in nanotechnology knowledge-diffusion networks (2015) 0.01
    0.0128268385 = product of:
      0.08978786 = sum of:
        0.08978786 = weight(_text_:interactions in 1823) [ClassicSimilarity], result of:
          0.08978786 = score(doc=1823,freq=2.0), product of:
            0.22965278 = queryWeight, product of:
              5.8977947 = idf(docFreq=329, maxDocs=44218)
              0.038938753 = queryNorm
            0.39097226 = fieldWeight in 1823, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.8977947 = idf(docFreq=329, maxDocs=44218)
              0.046875 = fieldNorm(doc=1823)
      0.14285715 = coord(1/7)
    
    Abstract
    Understanding the knowledge-diffusion networks of patent inventors can help governments and businesses effectively use their investment to stimulate commercial science and technology development. Such inventor networks are usually large and complex. This study proposes a multidimensional network analysis framework that utilizes Exponential Random Graph Models (ERGMs) to simultaneously model knowledge-sharing and knowledge-transfer processes, examine their interactions, and evaluate the impacts of network structures and public funding on knowledge-diffusion networks. Experiments are conducted on a longitudinal data set that covers 2 decades (1991-2010) of nanotechnology-related US Patent and Trademark Office (USPTO) patents. The results show that knowledge sharing and knowledge transfer are closely interrelated. High degree centrality or boundary inventors play significant roles in the network, and National Science Foundation (NSF) public funding positively affects knowledge sharing despite its small fraction in overall funding and upstream research topics.
  3. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.01
    0.009949936 = product of:
      0.034824774 = sum of:
        0.021635616 = weight(_text_:with in 7469) [ClassicSimilarity], result of:
          0.021635616 = score(doc=7469,freq=6.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2305746 = fieldWeight in 7469, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7469)
        0.013189158 = product of:
          0.026378317 = sum of:
            0.026378317 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
              0.026378317 = score(doc=7469,freq=2.0), product of:
                0.13635688 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038938753 = queryNorm
                0.19345059 = fieldWeight in 7469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7469)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing, it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are examined to find out: first, what is browsing and what cognitive processes are associated with it; second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject of the Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed. Three browsing strategies were identified: (1) search-oriented browse, scanning and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing interesting information in the presence of transient browse goals that represent changing tasks; and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently than did novices. Based on these findings, suggestions are made to hypertext developers.
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  4. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009) 0.01
    0.009949936 = product of:
      0.034824774 = sum of:
        0.021635616 = weight(_text_:with in 2753) [ClassicSimilarity], result of:
          0.021635616 = score(doc=2753,freq=6.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2305746 = fieldWeight in 2753, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2753)
        0.013189158 = product of:
          0.026378317 = sum of:
            0.026378317 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
              0.026378317 = score(doc=2753,freq=2.0), product of:
                0.13635688 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038938753 = queryNorm
                0.19345059 = fieldWeight in 2753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2753)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Social networks evolve over time with the addition and removal of nodes and links to survive and thrive in their environments. Previous studies have shown that the link-formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the important facilitators. In a research partnership with law enforcement agencies, we used dynamic social-network analysis methods to examine several plausible facilitators of co-offending relationships in a large-scale narcotics network consisting of individuals and vehicles. Multivariate Cox regression and a two-proportion z-test on cyclic and focal closures of the network showed that mutual acquaintance and vehicle affiliations were significant facilitators for the network under study. We also found that homophily with respect to age, race, and gender was not a good predictor of future link formation in these networks. Moreover, we examined the social causes and policy implications for the significance and insignificance of various facilitators including common jails on future co-offending. These findings provide important insights into the link-formation processes and the resilience of social networks. In addition, they can be used to aid in the prediction of future links. The methods described can also help in understanding the driving forces behind the formation and evolution of social networks facilitated by mobile and Web technologies.
    Date
    22. 3.2009 18:50:30
  5. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.01
    0.00881559 = product of:
      0.030854564 = sum of:
        0.017665405 = weight(_text_:with in 5259) [ClassicSimilarity], result of:
          0.017665405 = score(doc=5259,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.18826336 = fieldWeight in 5259, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5259)
        0.013189158 = product of:
          0.026378317 = sum of:
            0.026378317 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
              0.026378317 = score(doc=5259,freq=2.0), product of:
                0.13635688 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038938753 = queryNorm
                0.19345059 = fieldWeight in 5259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5259)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up-to-date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System, are also integrated in Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
  6. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.01
    0.00881559 = product of:
      0.030854564 = sum of:
        0.017665405 = weight(_text_:with in 5276) [ClassicSimilarity], result of:
          0.017665405 = score(doc=5276,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.18826336 = fieldWeight in 5276, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
        0.013189158 = product of:
          0.026378317 = sum of:
            0.026378317 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.026378317 = score(doc=5276,freq=2.0), product of:
                0.13635688 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038938753 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70 to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context.
    Date
    22. 7.2006 16:14:37
  7. Hu, P.J.-H.; Hsu, F.-M.; Hu, H.-f.; Chen, H.: Agency satisfaction with electronic record management systems : a large-scale survey (2010) 0.01
    0.006434018 = product of:
      0.045038123 = sum of:
        0.045038123 = weight(_text_:with in 4115) [ClassicSimilarity], result of:
          0.045038123 = score(doc=4115,freq=26.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.47997925 = fieldWeight in 4115, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4115)
      0.14285715 = coord(1/7)
    
    Abstract
    We investigated agency satisfaction with an electronic record management system (ERMS) that supports the electronic creation, archival, processing, transmittal, and sharing of records (documents) among autonomous government agencies. A factor model, explaining agency satisfaction with ERMS functionalities, offers hypotheses, which we tested empirically with a large-scale survey that involved more than 1,600 government agencies in Taiwan. The data showed a good fit to our model and supported all the hypotheses. Overall, agency satisfaction with ERMS functionalities appears jointly determined by regulatory compliance, job relevance, and satisfaction with support services. Among the determinants we studied, agency satisfaction with support services seems the strongest predictor of agency satisfaction with ERMS functionalities. Regulatory compliance also has important influences on agency satisfaction with ERMS, through its influence on job relevance and satisfaction with support services. Further analyses showed that satisfaction with support services partially mediated the impact of regulatory compliance on satisfaction with ERMS functionalities, and job relevance partially mediated the influence of regulatory compliance on satisfaction with ERMS functionalities. Our findings have important implications for research and practice, which we also discuss.
  8. Vishwanath, A.; Chen, H.: Personal communication technologies as an extension of the self : a cross-cultural comparison of people's associations with technology and their symbolic proximity with others (2008) 0.01
    0.0056655346 = product of:
      0.03965874 = sum of:
        0.03965874 = weight(_text_:with in 2355) [ClassicSimilarity], result of:
          0.03965874 = score(doc=2355,freq=14.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.42265022 = fieldWeight in 2355, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.046875 = fieldNorm(doc=2355)
      0.14285715 = coord(1/7)
    
    Abstract
    Increasingly, individuals use communication technologies such as e-mail, IMs, blogs, and cell phones to locate, learn about, and communicate with one another. Not much, however, is known about how individuals relate to various personal technologies, their preferences for each, or their extensional associations with them. Even less is known about the cultural differences in these preferences. The current study used the Galileo system of multidimensional scaling to systematically map the extensional associations with nine personal communication technologies across three cultures: U.S., Germany, and Singapore. Across the three cultures, the technologies closest to the self were similar, suggesting a universality of associations with certain technologies. In contrast, the technologies farther from the self were significantly different across cultures. Moreover, the magnitude of associations with each technology differed based on the extensional association or distance from the self. Also, and more importantly, the antecedents to these associations differed significantly across cultures, suggesting a stronger influence of cultural norms on personal-technology choice.
  9. Schumaker, R.P.; Chen, H.: Evaluating a news-aware quantitative trader : the effect of momentum and contrarian stock selection strategies (2008) 0.00
    0.0035330812 = product of:
      0.024731567 = sum of:
        0.024731567 = weight(_text_:with in 1352) [ClassicSimilarity], result of:
          0.024731567 = score(doc=1352,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2635687 = fieldWeight in 1352, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1352)
      0.14285715 = coord(1/7)
    
    Abstract
    We study the coupling of basic quantitative portfolio selection strategies with a financial news article prediction system, AZFinText. By varying the degrees of portfolio formation time, we found that a hybrid system using both quantitative strategy and a full set of financial news articles performed the best. With a 1-week portfolio formation period, we achieved a 20.79% trading return using a Momentum strategy and a 4.54% return using a Contrarian strategy over a 5-week holding period. We also found that trader overreaction to these events led AZFinText to capitalize on these short-term surges in price.
  10. Huang, C.; Fu, T.; Chen, H.: Text-based video content classification for online video-sharing sites (2010) 0.00
    0.0030908023 = product of:
      0.021635616 = sum of:
        0.021635616 = weight(_text_:with in 3452) [ClassicSimilarity], result of:
          0.021635616 = score(doc=3452,freq=6.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2305746 = fieldWeight in 3452, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3452)
      0.14285715 = coord(1/7)
    
    Abstract
    With the emergence of Web 2.0, sharing personal content, communicating ideas, and interacting with other online users in Web 2.0 communities have become daily routines for online users. User-generated data from Web 2.0 sites provide rich personal information (e.g., personal preferences and interests) and can be utilized to obtain insight about cyber communities and their social networks. Many studies have focused on leveraging user-generated information to analyze blogs and forums, but few studies have applied this approach to video-sharing Web sites. In this study, we propose a text-based framework for video content classification of online-video sharing Web sites. Different types of user-generated data (e.g., titles, descriptions, and comments) were used as proxies for online videos, and three types of text features (lexical, syntactic, and content-specific features) were extracted. Three feature-based classification techniques (C4.5, Naïve Bayes, and Support Vector Machine) were used to classify videos. To evaluate the proposed framework, user-generated data from candidate videos, which were identified by searching user-given keywords on YouTube, were first collected. Then, a subset of the collected data was randomly selected and manually tagged by users as our experiment data. The experimental results showed that the proposed approach was able to classify online videos based on users' interests with accuracy rates up to 87.2%, and all three types of text features contributed to discriminating videos. Support Vector Machine outperformed C4.5 and Naïve Bayes techniques in our experiments. In addition, our case study further demonstrated that accurate video-classification results are very useful for identifying implicit cyber communities on video-sharing Web sites.
  11. Fu, T.; Abbasi, A.; Chen, H.: A focused crawler for Dark Web forums (2010) 0.00
    0.0030908023 = product of:
      0.021635616 = sum of:
        0.021635616 = weight(_text_:with in 3471) [ClassicSimilarity], result of:
          0.021635616 = score(doc=3471,freq=6.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2305746 = fieldWeight in 3471, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3471)
      0.14285715 = coord(1/7)
    
    Abstract
    The unprecedented growth of the Internet has given rise to the Dark Web, the problematic facet of the Web associated with cybercrime, hate, and extremism. Despite the need for tools to collect and analyze Dark Web forums, the covert nature of this part of the Internet makes traditional Web crawling techniques insufficient for capturing such content. In this study, we propose a novel crawling system designed to collect Dark Web forum content. The system uses a human-assisted accessibility approach to gain access to Dark Web forums. Several URL ordering features and techniques enable efficient extraction of forum postings. The system also includes an incremental crawler coupled with a recall-improvement mechanism intended to facilitate enhanced retrieval and updating of collected content. Experiments conducted to evaluate the effectiveness of the human-assisted accessibility approach and the recall-improvement-based, incremental-update procedure yielded favorable results. The human-assisted approach significantly improved access to Dark Web forums while the incremental crawler with recall improvement also outperformed standard periodic- and incremental-update approaches. Using the system, we were able to collect over 100 Dark Web forums from three regions. A case study encompassing link and content analysis of collected forums was used to illustrate the value and importance of gathering and analyzing content from such online communities.
  12. Chen, H.; Yim, T.; Fye, D.: Automatic thesaurus generation for an electronic community system (1995) 0.00
    0.0025236295 = product of:
      0.017665405 = sum of:
        0.017665405 = weight(_text_:with in 2918) [ClassicSimilarity], result of:
          0.017665405 = score(doc=2918,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.18826336 = fieldWeight in 2918, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
      0.14285715 = coord(1/7)
    
    Abstract
    Reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used included term filtering, automatic indexing, and cluster analysis. The testbed for the research was the Worm Community System, which contains a comprehensive library of specialized community data and literature, currently in use by molecular biologists who study the nematode worm. The resulting worm thesaurus included 2709 researchers' names, 798 gene names, 20 experimental methods, and 4302 subject descriptors. On average, each term had about 90 weighted neighbouring terms indicating relevant concepts. The thesaurus was developed as an online search aid. Tests the worm thesaurus in an experiment with 6 worm researchers of varying degrees of expertise and background. The experiment showed that the thesaurus was an excellent 'memory jogging' device and that it supported learning and serendipitous browsing. Despite some occurrences of obvious noise, the system was useful in suggesting relevant concepts for the researchers' queries and it helped improve concept recall. With a simple browsing interface, an automatic thesaurus can become a useful tool for online search and can assist researchers in exploring and traversing a dynamic and complex electronic community system.
  13. Chen, H.; Fan, H.; Chau, M.; Zeng, D.: MetaSpider : meta-searching and categorization on the Web (2001) 0.00
    0.0025236295 = product of:
      0.017665405 = sum of:
        0.017665405 = weight(_text_:with in 6849) [ClassicSimilarity], result of:
          0.017665405 = score(doc=6849,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.18826336 = fieldWeight in 6849, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6849)
      0.14285715 = coord(1/7)
    
    Abstract
    It has become increasingly difficult to locate relevant information on the Web, even with the help of Web search engines. Two approaches to addressing the low precision and poor presentation of search results of current search tools are studied: meta-search and document categorization. Meta-search engines improve precision by selecting and integrating search results from generic or domain-specific Web search engines or other resources. Document categorization promises better organization and presentation of retrieved results. This article introduces MetaSpider, a meta-search engine that has real-time indexing and categorizing functions. We report in this paper the major components of MetaSpider and discuss related technical approaches. Initial results of a user evaluation study comparing MetaSpider, NorthernLight, and MetaCrawler in terms of clustering performance and of time and effort expended show that MetaSpider performed best in precision rate, but disclose no statistically significant differences in recall rate and time requirements. Our experimental study also reveals that MetaSpider exhibited a higher level of automation than the other two systems and facilitated efficient searching by providing the user with an organized, comprehensive view of the retrieved documents.
  14. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.00
    0.0025236295 = product of:
      0.017665405 = sum of:
        0.017665405 = weight(_text_:with in 3325) [ClassicSimilarity], result of:
          0.017665405 = score(doc=3325,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.18826336 = fieldWeight in 3325, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3325)
      0.14285715 = coord(1/7)
    
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest development in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning. However, the effectiveness of these tools in the classroom has not been evaluated. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the two other tools in overall satisfaction and the level of knowledge gained in their learning experience when using the tools for a class project on Internet applications development.
  15. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.00
    0.0025236295 = product of:
      0.017665405 = sum of:
        0.017665405 = weight(_text_:with in 4367) [ClassicSimilarity], result of:
          0.017665405 = score(doc=4367,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.18826336 = fieldWeight in 4367, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4367)
      0.14285715 = coord(1/7)
    
    Abstract
    Information extraction is an important text-mining task that aims at extracting prespecified types of information from large text collections and making them available in structured representations such as databases. In the biomedical domain, information extraction can be applied to help biologists make the most use of their digital-literature archives. Currently, there are large amounts of biomedical literature that contain rich information about biomedical substances. Extracting such knowledge requires a good named entity recognition technique. In this article, we combine conditional random fields (CRFs), a state-of-the-art sequence-labeling algorithm, with two semisupervised learning techniques, bootstrapping and feature sampling, to recognize disease names from biomedical literature. Two data-processing strategies for each technique also were analyzed: one sequentially processing unlabeled data partitions and another one processing unlabeled data partitions in a round-robin fashion. The experimental results showed the advantage of semisupervised learning techniques given limited labeled training data. Specifically, CRFs with bootstrapping implemented in sequential fashion outperformed strictly supervised CRFs for disease name recognition. The project was supported by NIH/NLM Grant R33 LM07299-01, 2002-2005.
  16. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: ¬A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.00
    0.0024982654 = product of:
      0.017487857 = sum of:
        0.017487857 = weight(_text_:with in 6928) [ClassicSimilarity], result of:
          0.017487857 = score(doc=6928,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.1863712 = fieldWeight in 6928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6928)
      0.14285715 = coord(1/7)
    
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classification of electronic brainstorming output and an evaluation of the results. Describes an electronic meeting system and describes the classification problem that exists in the group problem solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with those of a human expert and a Hopfield neural network. Discusses conclusions and directions for future research
  17. Chen, H.: An analysis of image queries in the field of art history (2001) 0.00
    0.0024982654 = product of:
      0.017487857 = sum of:
        0.017487857 = weight(_text_:with in 5187) [ClassicSimilarity], result of:
          0.017487857 = score(doc=5187,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.1863712 = fieldWeight in 5187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5187)
      0.14285715 = coord(1/7)
    
    Abstract
    Chen arranged with an Art History instructor to require 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and Jorgensen schemes are suggested.
  18. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.00
    0.0022609986 = product of:
      0.015826989 = sum of:
        0.015826989 = product of:
          0.031653978 = sum of:
            0.031653978 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
              0.031653978 = score(doc=2733,freq=2.0), product of:
                0.13635688 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038938753 = queryNorm
                0.23214069 = fieldWeight in 2733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2733)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2009 17:57:50
  19. Chen, H.; Shankaranarayanan, G.; She, L.: A machine learning approach to inductive query by examples : an experiment using relevance feedback, ID3, genetic algorithms, and simulated annealing (1998) 0.00
    0.0021413704 = product of:
      0.014989593 = sum of:
        0.014989593 = weight(_text_:with in 1148) [ClassicSimilarity], result of:
          0.014989593 = score(doc=1148,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.15974675 = fieldWeight in 1148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.046875 = fieldNorm(doc=1148)
      0.14285715 = coord(1/7)
    
    Abstract
    Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, information science researchers have turned to other newer inductive learning techniques including symbolic learning, genetic algorithms, and simulated annealing. These newer techniques, which are grounded in diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information systems. In this article, we first provide an overview of these newer techniques and their use in information retrieval research. In order to familiarize readers with the techniques, we present 3 promising methods: the symbolic ID3 algorithm, evolution-based genetic algorithms, and simulated annealing. We discuss their knowledge representations and algorithms in the unique context of information retrieval.
  20. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.00
    0.0021413704 = product of:
      0.014989593 = sum of:
        0.014989593 = weight(_text_:with in 5202) [ClassicSimilarity], result of:
          0.014989593 = score(doc=5202,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.15974675 = fieldWeight in 5202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded on object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.