Search (59 results, page 1 of 3)

  • Filter: author_ss:"Chen, H."
  1. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.06
    0.061509836 = product of:
      0.10251639 = sum of:
        0.017369429 = weight(_text_:of in 7469) [ClassicSimilarity], result of:
          0.017369429 = score(doc=7469,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.22855641 = fieldWeight in 7469, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7469)
        0.068685904 = weight(_text_:subject in 7469) [ClassicSimilarity], result of:
          0.068685904 = score(doc=7469,freq=8.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.39516178 = fieldWeight in 7469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7469)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
              0.032922123 = score(doc=7469,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 7469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7469)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
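
    The explain tree above can be reproduced by hand. Below is a minimal Python sketch of Lucene's ClassicSimilarity arithmetic using only the constants printed in the tree; the tf, idf, and weighting formulas are the standard ClassicSimilarity ones, and every numeric constant is copied from the output above.
    ```python
    import math

    # Constants copied from the explain tree for hit 1 (doc 7469).
    MAX_DOCS = 44218
    QUERY_NORM = 0.04859849

    def idf(doc_freq: int) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

    def term_weight(freq: float, doc_freq: int, field_norm: float) -> float:
        tf = math.sqrt(freq)                       # tf(freq) = sqrt(freq)
        query_weight = idf(doc_freq) * QUERY_NORM  # queryWeight = idf * queryNorm
        field_weight = tf * idf(doc_freq) * field_norm
        return query_weight * field_weight         # weight(term in doc)

    w_of      = term_weight(freq=14.0, doc_freq=25162, field_norm=0.0390625)
    w_subject = term_weight(freq=8.0,  doc_freq=3361,  field_norm=0.0390625)
    w_22      = term_weight(freq=2.0,  doc_freq=3622,  field_norm=0.0390625) * 0.5  # coord(1/2)

    score = (w_of + w_subject + w_22) * 3 / 5      # coord(3/5): 3 of 5 query clauses matched
    print(f"{score:.9f}")                          # ~0.061509836, the value at the top of the tree
    ```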
    
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing, it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are examined, to find out: first, what is browsing and what cognitive processes are associated with it; second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject of the Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed. Three browsing strategies were identified: (1) search-oriented browse, scanning and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing interesting information in the presence of transient browse goals that represent changing tasks; and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently than did novices. Based on these findings, suggestions are made to hypertext developers.
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  2. Chen, H.; Houston, A.L.; Sewell, R.R.; Schatz, B.R.: Internet browsing and searching : user evaluations of category map and concept space techniques (1998) 0.03
    0.032434322 = product of:
      0.0810858 = sum of:
        0.019651264 = weight(_text_:of in 869) [ClassicSimilarity], result of:
          0.019651264 = score(doc=869,freq=28.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.25858206 = fieldWeight in 869, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=869)
        0.06143454 = weight(_text_:subject in 869) [ClassicSimilarity], result of:
          0.06143454 = score(doc=869,freq=10.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.35344344 = fieldWeight in 869, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=869)
      0.4 = coord(2/5)
    
    Abstract
    The Internet provides an exceptional testbed for developing algorithms that can improve browsing and searching large information spaces. Browsing and searching tasks are susceptible to problems of information overload and vocabulary differences. Much of the current research is aimed at the development and refinement of algorithms to improve browsing and searching by addressing these problems. Our research was focused on discovering whether two of the algorithms our research group has developed, a Kohonen algorithm category map for browsing, and an automatically generated concept space algorithm for searching, can help improve browsing and/or searching the Internet. Our results indicate that a Kohonen self-organizing map (SOM)-based algorithm can successfully categorize a large and eclectic Internet information space (the Entertainment subcategory of Yahoo!) into manageable sub-spaces that users can successfully navigate to locate a homepage of interest to them. The SOM algorithm worked best with browsing tasks that were very broad, and in which subjects skipped around between categories. Subjects especially liked the visual and graphical aspects of the map. Subjects who tried to do a directed search, and those who wanted to use the more familiar mental models (alphabetic or hierarchical organization) for browsing, found that the map did not work well. The results from the concept space experiment were especially encouraging. There were no significant differences among the precision measures for the set of documents identified by subject-suggested terms, thesaurus-suggested terms, and the combination of subject- and thesaurus-suggested terms. The recall measures indicated that the combination of subject- and thesaurus-suggested terms exhibited significantly better recall than subject-suggested terms alone. Furthermore, analysis of the homepages indicated that there was limited overlap between the homepages retrieved by the subject-suggested and thesaurus-suggested terms. Since the retrieved homepages for the most part were different, this suggests that a user can enhance a keyword-based search by using an automatically generated concept space. Subjects especially liked the level of control that they could exert over the search, and the fact that the terms suggested by the thesaurus were 'real' (i.e., originating in the homepages) and therefore guaranteed to have retrieval success.
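    As a hedged illustration of the category-map technique: a minimal Kohonen SOM in Python (numpy only). The toy document vectors, the 6x6 grid, and the learning-rate and radius schedules are invented for illustration, not taken from the study.
    ```python
    import numpy as np

    # Illustrative sketch only: a minimal Kohonen self-organizing map (SOM) of
    # the kind used above to partition a document space into browsable regions.
    rng = np.random.default_rng(0)
    docs = rng.random((200, 10))               # 200 documents as 10-dim term vectors

    grid_w, grid_h = 6, 6                      # 6x6 map of category nodes
    weights = rng.random((grid_w * grid_h, 10))
    coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)], float)

    epochs = 50
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)        # decaying learning rate
        radius = max(3.0 * (1 - epoch / epochs), 0.5)
        for v in docs:
            bmu = np.argmin(((weights - v) ** 2).sum(axis=1))  # best-matching unit
            d = np.linalg.norm(coords - coords[bmu], axis=1)   # grid distance to BMU
            h = np.exp(-(d ** 2) / (2 * radius ** 2))          # neighbourhood kernel
            weights += lr * h[:, None] * (v - weights)         # pull nodes toward doc

    # Each document's category is its best-matching node on the 2-D map.
    cats = [int(np.argmin(((weights - v) ** 2).sum(axis=1))) for v in docs]
    print(cats[:10])
    ```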
    Source
    Journal of the American Society for Information Science. 49(1998) no.7, S.582-603
  3. Chen, H.: Explaining and alleviating information management indeterminism : a knowledge-based framework (1994) 0.03
    0.030382723 = product of:
      0.07595681 = sum of:
        0.021008085 = weight(_text_:of in 8221) [ClassicSimilarity], result of:
          0.021008085 = score(doc=8221,freq=8.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.27643585 = fieldWeight in 8221, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=8221)
        0.05494872 = weight(_text_:subject in 8221) [ClassicSimilarity], result of:
          0.05494872 = score(doc=8221,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.31612942 = fieldWeight in 8221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0625 = fieldNorm(doc=8221)
      0.4 = coord(2/5)
    
    Abstract
    Attempts to identify the nature and causes of information management indeterminism in an online research environment and proposes solutions for alleviating this indeterminism. Conducts two empirical studies of information management activities. The first identified the types and nature of information management indeterminism by evaluating archived text. The second focused on four sources of indeterminism: subject area knowledge, classification knowledge, system knowledge, and collaboration knowledge. Proposes a knowledge-based design for alleviating indeterminism, which contains a system-generated thesaurus and an inferencing engine.
  4. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: ¬An evaluation of classification models for question topic categorization (2012) 0.02
    0.023562813 = product of:
      0.058907032 = sum of:
        0.024564082 = weight(_text_:of in 237) [ClassicSimilarity], result of:
          0.024564082 = score(doc=237,freq=28.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.32322758 = fieldWeight in 237, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
        0.034342952 = weight(_text_:subject in 237) [ClassicSimilarity], result of:
          0.034342952 = score(doc=237,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.4 = coord(2/5)
    
    Abstract
    We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions, organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification as well as on short texts. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-word features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of the state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show what aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.889-903
  5. Chen, H.; Ng, T.D.; Martinez, J.; Schatz, B.R.: ¬A concept space approach to addressing the vocabulary problem in scientific information retrieval : an experiment on the Worm Community System (1997) 0.02
    0.021615213 = product of:
      0.054038033 = sum of:
        0.019695079 = weight(_text_:of in 6492) [ClassicSimilarity], result of:
          0.019695079 = score(doc=6492,freq=18.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.25915858 = fieldWeight in 6492, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6492)
        0.034342952 = weight(_text_:subject in 6492) [ClassicSimilarity], result of:
          0.034342952 = score(doc=6492,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 6492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6492)
      0.4 = coord(2/5)
    
    Abstract
    This research presents an algorithmic approach to addressing the vocabulary problem in scientific information retrieval and information sharing, using the molecular biology domain as an example. We first present a literature review of cognitive studies related to the vocabulary problem and vocabulary-based search aids (thesauri) and then discuss techniques for building robust and domain-specific thesauri to assist in cross-domain scientific information retrieval. Using a variation of the automatic thesaurus generation techniques, which we refer to as the concept space approach, we recently conducted an experiment in the molecular biology domain in which we created a C. elegans worm thesaurus of 7,657 worm-specific terms and a Drosophila fly thesaurus of 15,626 terms. About 30% of these terms overlapped, which created vocabulary paths from one subject domain to the other. Based on a cognitive study of term association involving 4 biologists, we found that a large percentage (59.6-85.6%) of the terms suggested by the subjects were identified in the conjoined fly-worm thesaurus. However, we found only a small percentage (8.4-18.1%) of the associations suggested by the subjects in the thesaurus.
    Source
    Journal of the American Society for Information Science. 48(1997) no.1, S.17-31
  6. Chen, H.; Yim, T.; Fye, D.: Automatic thesaurus generation for an electronic community system (1995) 0.02
    0.02016957 = product of:
      0.05042392 = sum of:
        0.016080966 = weight(_text_:of in 2918) [ClassicSimilarity], result of:
          0.016080966 = score(doc=2918,freq=12.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.21160212 = fieldWeight in 2918, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
        0.034342952 = weight(_text_:subject in 2918) [ClassicSimilarity], result of:
          0.034342952 = score(doc=2918,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.19758089 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
      0.4 = coord(2/5)
    
    Abstract
    Reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used included term filtering, automatic indexing, and cluster analysis. The testbed for the research was the Worm Community System, which contains a comprehensive library of specialized community data and literature, currently in use by molecular biologists who study the nematode worm. The resulting worm thesaurus included 2709 researchers' names, 798 gene names, 20 experimental methods, and 4302 subject descriptors. On average, each term had about 90 weighted neighbouring terms indicating relevant concepts. The thesaurus was developed as an online search aid. Tests the worm thesaurus in an experiment with 6 worm researchers of varying degrees of expertise and background. The experiment showed that the thesaurus was an excellent 'memory jogging' device and that it supported learning and serendipitous browsing. Despite some occurrences of obvious noise, the system was useful in suggesting relevant concepts for the researchers' queries and it helped improve concept recall. With a simple browsing interface, an automatic thesaurus can become a useful tool for online search and can assist researchers in exploring and traversing a dynamic and complex electronic community system.
    Source
    Journal of the American Society for Information Science. 46(1995) no.3, S.175-193
  7. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.02
    0.019635828 = product of:
      0.04908957 = sum of:
        0.0078780325 = weight(_text_:of in 5202) [ClassicSimilarity], result of:
          0.0078780325 = score(doc=5202,freq=2.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.103663445 = fieldWeight in 5202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
        0.041211538 = weight(_text_:subject in 5202) [ClassicSimilarity], result of:
          0.041211538 = score(doc=5202,freq=2.0), product of:
            0.17381717 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04859849 = queryNorm
            0.23709705 = fieldWeight in 5202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
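    Entries 5-7 all build "concept spaces" by co-occurrence analysis over term-document weights. A minimal sketch under simplifying assumptions follows: the five terms, the toy weight matrix, and the plain row normalization are illustrative inventions; the authors' published cluster function is more elaborate.
    ```python
    import numpy as np

    # Illustrative sketch (not the authors' code): a co-occurrence "concept
    # space" over a term-by-document weight matrix.
    terms = ["gene", "mutation", "worm", "fly", "protein"]
    X = np.array([[2, 0, 1, 0],      # X[i, j] = weight of term i in document j
                  [1, 1, 0, 0],
                  [0, 2, 1, 1],
                  [0, 0, 2, 1],
                  [1, 1, 1, 0]], dtype=float)

    co = X @ X.T                                # term-term co-occurrence weights
    strength = co / co.diagonal()[:, None]      # how strongly term i suggests term j
    np.fill_diagonal(strength, 0.0)

    for i, t in enumerate(terms):               # top associations per term
        ranked = sorted(zip(strength[i], terms), reverse=True)[:3]
        print(t, "->", [(name, round(float(w), 2)) for w, name in ranked])
    ```
    The asymmetric normalization (dividing row i by term i's own weight mass) is what lets a rare term point strongly at a common one without the reverse being true, which is the usual motivation for this family of thesaurus-generation techniques.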
    Source
    Journal of the American Society for Information Science. 49(1998) no.3, S.206-216
  8. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: ¬A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.02
    0.017088467 = product of:
      0.042721167 = sum of:
        0.026260108 = weight(_text_:of in 5276) [ClassicSimilarity], result of:
          0.026260108 = score(doc=5276,freq=32.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.34554482 = fieldWeight in 5276, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.032922123 = score(doc=5276,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70 to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context.
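    A rough sketch of the feature-plus-classifier pattern the framework describes, with character n-grams standing in for the paper's four richer writing-style feature types and a linear SVM, the best performer in the study. The messages, author labels, and all scikit-learn parameter choices below are invented for illustration.
    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy corpus: two authors with distinct writing styles.
    messages = [
        "selling cheap watches, contact me asap!!",
        "best deals on watches... msg me now!!",
        "The meeting minutes are attached for review.",
        "Please find the agenda attached; comments welcome.",
    ]
    authors = ["A", "A", "B", "B"]

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # lexical n-grams
        LinearSVC(),                                              # SVM classifier
    )
    clf.fit(messages, authors)
    print(clf.predict(["cheap watches for sale, msg me!!"]))      # -> ['A']
    ```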
    Date
    22. 7.2006 16:14:37
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.378-393
  9. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.02
    0.016814284 = product of:
      0.04203571 = sum of:
        0.022282438 = weight(_text_:of in 2733) [ClassicSimilarity], result of:
          0.022282438 = score(doc=2733,freq=16.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.2932045 = fieldWeight in 2733, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2733)
        0.019753272 = product of:
          0.039506543 = sum of:
            0.039506543 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
              0.039506543 = score(doc=2733,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.23214069 = fieldWeight in 2733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2733)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.595-607
  10. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009) 0.02
    0.015681192 = product of:
      0.03920298 = sum of:
        0.022741921 = weight(_text_:of in 2753) [ClassicSimilarity], result of:
          0.022741921 = score(doc=2753,freq=24.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.2992506 = fieldWeight in 2753, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2753)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
              0.032922123 = score(doc=2753,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 2753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2753)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Social networks evolve over time with the addition and removal of nodes and links to survive and thrive in their environments. Previous studies have shown that the link-formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the important facilitators. In a research partnership with law enforcement agencies, we used dynamic social-network analysis methods to examine several plausible facilitators of co-offending relationships in a large-scale narcotics network consisting of individuals and vehicles. Multivariate Cox regression and a two-proportion z-test on cyclic and focal closures of the network showed that mutual acquaintance and vehicle affiliations were significant facilitators for the network under study. We also found that homophily with respect to age, race, and gender was not a good predictor of future link formation in these networks. Moreover, we examined the social causes and policy implications of the significance and insignificance of various facilitators, including common jails, on future co-offending. These findings provide important insights into the link-formation processes and the resilience of social networks. In addition, they can be used to aid in the prediction of future links. The methods described can also help in understanding the driving forces behind the formation and evolution of social networks facilitated by mobile and Web technologies.
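    For reference, a two-proportion z-test of the kind applied to the network closures compares closure rates between node pairs with and without a candidate facilitator. A small sketch with invented counts:
    ```python
    import math

    # Illustrative counts only: x1/n1 is the closure rate for pairs with the
    # facilitator (e.g., a shared vehicle), x2/n2 for pairs without it.
    def two_proportion_z(x1, n1, x2, n2):
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)                        # pooled proportion
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
        return (p1 - p2) / se

    z = two_proportion_z(x1=120, n1=400, x2=60, n2=380)  # toy counts
    print(round(z, 2))  # |z| > 1.96 -> significant at the 5% level (two-sided)
    ```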
    Date
    22. 3.2009 18:50:30
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.655-665
  11. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.01
    0.013532196 = product of:
      0.03383049 = sum of:
        0.017369429 = weight(_text_:of in 5259) [ClassicSimilarity], result of:
          0.017369429 = score(doc=5259,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.22855641 = fieldWeight in 5259, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5259)
        0.016461061 = product of:
          0.032922123 = sum of:
            0.032922123 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
              0.032922123 = score(doc=5259,freq=2.0), product of:
                0.17018363 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04859849 = queryNorm
                0.19345059 = fieldWeight in 5259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5259)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up-to-date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System are also integrated into Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.5, S.457-468
  12. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    0.005437491 = product of:
      0.027187455 = sum of:
        0.027187455 = weight(_text_:of in 4276) [ClassicSimilarity], result of:
          0.027187455 = score(doc=4276,freq=70.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.35774738 = fieldWeight in 4276, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.2 = coord(1/5)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed; and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
    Source
    Annual review of information science and technology. 39(2005), S.139-177
  13. Zhu, B.; Chen, H.: Validating a geographical image retrieval system (2000) 0.01
    0.0052256957 = product of:
      0.026128478 = sum of:
        0.026128478 = weight(_text_:of in 4769) [ClassicSimilarity], result of:
          0.026128478 = score(doc=4769,freq=22.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.34381276 = fieldWeight in 4769, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4769)
      0.2 = coord(1/5)
    
    Abstract
    This paper summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment that validated the system's performance against that of human subjects, in an effort to address the scarcity of research evaluating the performance of an algorithm against that of human beings. The results of the experiment indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances the texture features of an image are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms.
    Source
    Journal of the American Society for Information Science. 51(2000) no.7, S.625-634
  14. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.01
    0.005145909 = product of:
      0.025729544 = sum of:
        0.025729544 = weight(_text_:of in 3845) [ClassicSimilarity], result of:
          0.025729544 = score(doc=3845,freq=12.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.33856338 = fieldWeight in 3845, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3845)
      0.2 = coord(1/5)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into a design of an 'intelligent' document-based retrieval system. Following a discussion of this system, discusses the broader implications of the research for the design of information retrieval systems.
  15. Chen, H.; Chung, W.; Qin, J.; Reid, E.; Sageman, M.; Weimann, G.: Uncovering the dark Web : a case study of Jihad on the Web (2008) 0.00
    0.004982505 = product of:
      0.024912525 = sum of:
        0.024912525 = weight(_text_:of in 1880) [ClassicSimilarity], result of:
          0.024912525 = score(doc=1880,freq=20.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.32781258 = fieldWeight in 1880, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1880)
      0.2 = coord(1/5)
    
    Abstract
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the Dark Web - the reverse side of the Web used by terrorists. Currently, the problems of information overload and the difficulty of obtaining a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information on 39 Jihad Web sites and developed visualizations of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, having a high potential to assist in the investigation and understanding of terrorist activities by producing results that could potentially help guide both policymaking and intelligence research.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1347-1359
  16. Chen, H.; Baptista Nunes, J.M.; Ragsdell, G.; An, X.: Somatic and cultural knowledge : drivers of a habitus-driven model of tacit knowledge acquisition (2019) 0.00
    0.0049495255 = product of:
      0.024747627 = sum of:
        0.024747627 = weight(_text_:of in 5460) [ClassicSimilarity], result of:
          0.024747627 = score(doc=5460,freq=58.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.32564276 = fieldWeight in 5460, product of:
              7.615773 = tf(freq=58.0), with freq of:
                58.0 = termFreq=58.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5460)
      0.2 = coord(1/5)
    
    Abstract
    The purpose of this paper is to identify and explain the role of individual learning and development in acquiring tacit knowledge in the context of the inexorable and intense continuous change (technological and otherwise) that characterizes our society today, and also to investigate the software (SW) sector, which is at the core of contemporary continuous change and is a paradigm of effective and intrinsic knowledge sharing (KS). This makes the SW sector unique and different from others, where KS is so hard to implement.
    Design/methodology/approach
    The study employed an inductive qualitative approach based on a multi-case study design, composed of three successful SW companies in China. These companies are representative of the fabric of the sector, namely a small- and medium-sized enterprise, a large private company and a large state-owned enterprise. The fieldwork included 44 participants who were interviewed using a semi-structured script. The interview data were coded and interpreted following the Straussian grounded theory pattern of open coding, axial coding and selective coding. The process of interviewing was stopped when theoretical saturation was achieved after a careful process of theoretical sampling.
    Findings
    The findings of this research suggest that individual learning and development are deemed to be the fundamental feature for professional success and survival in the continuously changing environment of the SW industry today. However, individual learning was described by the participants as much more than a mere individual process. It involves a collective and participatory effort within the organization and the sector as a whole, and a KS process that transcends organizational, cultural and national borders. Individuals in particular are mostly motivated by the pressing need to face and adapt to the dynamic and changeable environments of today's digital society that is led by the sector. Software practitioners are continuously in need of learning, refreshing and accumulating tacit knowledge, partly because it is required by their companies, but also due to a sound awareness of continuous technical and technological changes that seem only to increase with the advances of information technology. This led to a clear theoretical understanding that the continuous change facing the sector has led to individual acquisition of cultural and somatic knowledge, which in turn lays the foundation not only for awareness of the need for continuous individual professional development but also for the creation of habitus related to KS and continuous learning.
    Originality/value
    The study reported in this paper shows that there is a theoretical link between the existence of conducive organizational and sector-wide somatic and cultural knowledge and the success of KS practices that lead to individual learning and development. The theory proposed therefore suggests that somatic and cultural knowledge are crucial drivers for the creation of habitus of individual tacit knowledge acquisition. The paper further proposes a habitus-driven individual development (HDID) theoretical model that can be of use to both academics and practitioners interested in fostering and developing processes of KS and individual development in knowledge-intensive organizations.
    Source
    Journal of documentation. 75(2019) no.5, S.927-953
  17. Ramsey, M.C.; Chen, H.; Zhu, B.; Schatz, B.R.: ¬A collection of visual thesauri for browsing large collections of geographic images (1999) 0.00
    0.0048634405 = product of:
      0.024317201 = sum of:
        0.024317201 = weight(_text_:of in 3922) [ClassicSimilarity], result of:
          0.024317201 = score(doc=3922,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.31997898 = fieldWeight in 3922, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3922)
      0.2 = coord(1/5)
    
    Abstract
    Digital libraries of geo-spatial multimedia content are currently deficient in providing fuzzy, concept-based retrieval mechanisms to users. The main challenge is that indexing and thesaurus creation are extremely labor-intensive processes for text documents and especially for images. Recently, 800,000 declassified satellite photographs were made available by the US Geological Survey. Additionally, millions of satellite and aerial photographs are archived in national and local map libraries. Such enormous collections make human indexing and thesaurus generation methods impossible to utilize. In this article we propose a scalable method to automatically generate visual thesauri of large collections of geo-spatial media using fuzzy, unsupervised machine-learning techniques.
    Source
    Journal of the American Society for Information Science. 50(1999) no.9, S.826-834
  18. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: ¬A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.00
    0.0048634405 = product of:
      0.024317201 = sum of:
        0.024317201 = weight(_text_:of in 6928) [ClassicSimilarity], result of:
          0.024317201 = score(doc=6928,freq=14.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.31997898 = fieldWeight in 6928, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6928)
      0.2 = coord(1/5)
    
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classifying electronic brainstorming output, and an evaluation of the results. Describes an electronic meeting system and the classification problem that exists in the group problem-solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with classifications produced by a human expert and by a Hopfield neural network. Discusses conclusions and directions for future research.
    Source
    Journal of the American Society for Information Science. 48(1997) no.2, S.157-170
  19. Vishwanath, A.; Chen, H.: Technology clusters : using multidimensional scaling to evaluate and structure technology clusters (2006) 0.00
    0.004734108 = product of:
      0.023670541 = sum of:
        0.023670541 = weight(_text_:of in 6006) [ClassicSimilarity], result of:
          0.023670541 = score(doc=6006,freq=26.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.31146988 = fieldWeight in 6006, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6006)
      0.2 = coord(1/5)
    
    Abstract
    Empirical evidence suggests that the ownership of related products that form a technology cluster is significantly better than the attributes of an innovation at predicting adoption. The treatment of technology clusters, however, has been ad hoc and study specific: researchers often make a priori assumptions about the relationships between technologies and measure ownership using lists of functionally related technologies, without any systematic reasoning. Hence, the authors set out to examine empirically the composition of technology clusters and the differences, if any, in clusters of technologies formed by adopters and nonadopters. Using the Galileo system of multidimensional scaling and the associational diffusion framework, the dissimilarities between 30 technology concepts were scored by adopters and nonadopters. Results indicate clear differences in the conceptualization of clusters: adopters tend to relate technologies based on their functional similarity; here, innovations are perceived to be complementary, and hence, adoption of one technology spurs the adoption of related technologies. On the other hand, nonadopters tend to relate technologies using a stricter ascendancy of association, where the adoption of an innovation makes subsequent innovations redundant. The results question the measurement approaches and present an alternative methodology.
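    A small sketch of the core analysis step, multidimensional scaling of a precomputed dissimilarity matrix; the four technologies and their pairwise scores are invented stand-ins for the paper's 30 concepts scored with the Galileo system.
    ```python
    import numpy as np
    from sklearn.manifold import MDS

    # Toy dissimilarity matrix (symmetric, zero diagonal): larger = less related.
    techs = ["PC", "smartphone", "tablet", "fax"]
    dissim = np.array([[0, 2, 3, 8],
                       [2, 0, 1, 9],
                       [3, 1, 0, 7],
                       [8, 9, 7, 0]], dtype=float)

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)   # 2-D map; nearby points form one cluster
    for t, (x, y) in zip(techs, coords):
        print(f"{t:10s} {x:6.2f} {y:6.2f}")
    ```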
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.11, S.1451-1460
  20. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.00
    0.004726819 = product of:
      0.023634095 = sum of:
        0.023634095 = weight(_text_:of in 5704) [ClassicSimilarity], result of:
          0.023634095 = score(doc=5704,freq=18.0), product of:
            0.07599624 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04859849 = queryNorm
            0.3109903 = fieldWeight in 5704, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5704)
      0.2 = coord(1/5)
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study to apply a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that three parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by resolving discrepancies between indexer vocabularies and end user vocabularies.
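    A rough sketch of Hopfield-style spreading activation with the three convergence controls named above (maximum activated nodes, maximum allowable error, maximum iterations). The term list, weight matrix, and tanh transfer function are invented for illustration; the study's actual network and transfer function differ in detail.
    ```python
    import numpy as np

    # Toy symmetric term-association weights (e.g., from co-occurrence analysis).
    terms = ["neural net", "hopfield", "memory", "indexing", "retrieval"]
    W = np.array([[0.0, 0.8, 0.5, 0.1, 0.2],
                  [0.8, 0.0, 0.6, 0.1, 0.1],
                  [0.5, 0.6, 0.0, 0.2, 0.3],
                  [0.1, 0.1, 0.2, 0.0, 0.7],
                  [0.2, 0.1, 0.3, 0.7, 0.0]])

    def spread(seed_idx, max_nodes=3, max_error=1e-4, max_iter=100):
        mu = np.zeros(len(terms))
        mu[seed_idx] = 1.0                           # activate the query term
        for _ in range(max_iter):                    # maximum number of iterations
            nxt = np.tanh(W @ mu)                    # sigmoid-style transfer
            top = np.argsort(nxt)[::-1][:max_nodes]  # cap on activated nodes
            clipped = np.zeros_like(nxt)
            clipped[top] = nxt[top]
            if np.abs(clipped - mu).sum() < max_error:
                break                                # within allowable error: converged
            mu = clipped
        return [terms[i] for i in np.argsort(mu)[::-1] if mu[i] > 0]

    print(spread(seed_idx=1))   # terms the net associates with "hopfield"
    ```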
    Source
    Journal of information science. 24(1998) no.1, S.3-18

Types

  • a 59
  • el 1