Search (61 results, page 1 of 4)

  Filter: author_ss:"Chen, H."
  1. Dumais, S.; Chen, H.: Hierarchical classification of Web content (2000) 0.03
    0.032967843 = product of:
      0.0549464 = sum of:
        0.0058515854 = product of:
          0.05266427 = sum of:
            0.05266427 = weight(_text_:p in 492) [ClassicSimilarity], result of:
              0.05266427 = score(doc=492,freq=2.0), product of:
                0.11047626 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.03072615 = queryNorm
                0.47670212 = fieldWeight in 492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.09375 = fieldNorm(doc=492)
          0.11111111 = coord(1/9)
        0.005416122 = weight(_text_:a in 492) [ClassicSimilarity], result of:
          0.005416122 = score(doc=492,freq=2.0), product of:
            0.035428695 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03072615 = queryNorm
            0.15287387 = fieldWeight in 492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=492)
        0.043678693 = weight(_text_:u in 492) [ClassicSimilarity], result of:
          0.043678693 = score(doc=492,freq=2.0), product of:
            0.10061107 = queryWeight, product of:
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.03072615 = queryNorm
            0.43413407 = fieldWeight in 492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2744443 = idf(docFreq=4547, maxDocs=44218)
              0.09375 = fieldNorm(doc=492)
      0.6 = coord(3/5)
    
    Source
    Proceedings of the 23rd ACM SIGIR International Conference on Research and Development in Information Retrieval. Ed. by N.J. Belkin, P. Ingwersen and M.K. Leong
    Type
    a
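The scoring breakdown above is Lucene's ClassicSimilarity explain output: each leaf term weight is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, with tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing the first leaf (`_text_:p` in doc 492) from the values shown:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def classic_tf(freq: float) -> float:
    # ClassicSimilarity: tf = sqrt(term frequency)
    return math.sqrt(freq)

# Values taken from the first leaf of result 1 (_text_:p in doc 492)
idf = classic_idf(3298, 44218)                   # = 3.5955126 above
query_weight = idf * 0.03072615                  # idf * queryNorm  = 0.11047626
field_weight = classic_tf(2.0) * idf * 0.09375   # tf * idf * fieldNorm = 0.47670212
leaf_score = query_weight * field_weight         # = 0.05266427

print(round(leaf_score, 8))
```

Multiplying the leaf by coord(1/9) = 0.11111111 then yields the 0.0058515854 shown in the tree.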
  2. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.03
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded on object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the three thesauri were comparable. Our analysis also revealed that the terms suggested by the three thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
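The co-occurrence analysis named in the abstract above can be illustrated with a toy sketch (hypothetical mini-corpus and a simple symmetric count, not the paper's actual asymmetric cluster weighting): count how often index terms appear in the same document, then rank a term's associates.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical mini-corpus: each document is a set of index terms
docs = [
    {"parallel computing", "indexing", "thesaurus"},
    {"indexing", "thesaurus", "co-occurrence"},
    {"parallel computing", "supercomputer"},
    {"thesaurus", "co-occurrence", "retrieval"},
]

# Symmetric co-occurrence counts over all term pairs per document
cooc = defaultdict(int)
for doc in docs:
    for a, b in combinations(sorted(doc), 2):
        cooc[(a, b)] += 1

def associated(term, top=3):
    """Rank terms that co-occur with `term`, most frequent first."""
    scores = {}
    for (a, b), n in cooc.items():
        if a == term:
            scores[b] = scores.get(b, 0) + n
        elif b == term:
            scores[a] = scores.get(a, 0) + n
    return sorted(scores, key=lambda t: (-scores[t], t))[:top]

print(associated("thesaurus"))  # -> ['co-occurrence', 'indexing', 'parallel computing']
```

A system-generated thesaurus of the kind described is essentially this table computed at scale, with weighting applied to the raw counts.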
  3. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.02
    
    Abstract
    With the rapid proliferation of Internet technologies and applications, misuse of online messages for inappropriate or illegal purposes has become a major concern for society. The anonymous nature of online-message distribution makes identity tracing a critical problem. We developed a framework for authorship identification of online messages to address the identity-tracing problem. In this framework, four types of writing-style features (lexical, syntactic, structural, and content-specific features) are extracted and inductive learning algorithms are used to build feature-based classification models to identify authorship of online messages. To examine this framework, we conducted experiments on English and Chinese online-newsgroup messages. We compared the discriminating power of the four types of features and of three classification techniques: decision trees, backpropagation neural networks, and support vector machines. The experimental results showed that the proposed approach was able to identify authors of online messages with satisfactory accuracy of 70 to 95%. All four types of message features contributed to discriminating authors of online messages. Support vector machines outperformed the other two classification techniques in our experiments. The high performance we achieved for both the English and Chinese datasets showed the potential of this approach in a multiple-language context.
    Date
    22. 7.2006 16:14:37
    Type
    a
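The lexical and structural feature types described in the abstract above can be sketched with a few illustrative measures (hypothetical and far simpler than the paper's full feature set, which also includes syntactic and content-specific features):

```python
import string

def style_features(message: str) -> dict:
    """A few illustrative writing-style features: lexical and structural."""
    words = message.split()
    chars = len(message)
    vocab = {w.strip(string.punctuation).lower() for w in words}
    vocab.discard("")
    return {
        # lexical features
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "vocabulary_richness": len(vocab) / max(len(words), 1),  # type/token ratio
        "punct_per_char": sum(c in string.punctuation for c in message) / max(chars, 1),
        # structural feature: number of lines in the message
        "lines": message.count("\n") + 1,
    }

f = style_features("Hello there!\nSelling cheap watches, cheap watches!!!")
print(f["lines"])  # -> 2
```

Vectors like these, one per message, are what a classifier such as a support vector machine would be trained on to discriminate authors.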
  4. Schatz, B.R.; Johnson, E.H.; Cochrane, P.A.; Chen, H.: Interactive term suggestion for users of digital libraries : using thesauri and co-occurrence lists for information retrieval (1996) 0.02
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
  5. Schroeder, J.; Xu, J.; Chen, H.; Chau, M.: Automated criminal link analysis based on domain knowledge (2007) 0.01
    
    Abstract
    Link (association) analysis has been used in the criminal justice domain to search large datasets for associations between crime entities in order to facilitate crime investigations. However, link analysis still faces many challenging problems, such as information overload, high search complexity, and heavy reliance on domain knowledge. To address these challenges, this article proposes several techniques for automated, effective, and efficient link analysis. These techniques include the co-occurrence analysis, the shortest path algorithm, and a heuristic approach to identifying associations and determining their importance. We developed a prototype system called CrimeLink Explorer based on the proposed techniques. Results of a user study with 10 crime investigators from the Tucson Police Department showed that our system could help subjects conduct link analysis more efficiently than traditional single-level link analysis tools. Moreover, subjects believed that association paths found based on the heuristic approach were more accurate than those found based solely on the co-occurrence analysis and that the automated link analysis system would be of great help in crime investigations.
    Type
    a
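The shortest-path component mentioned in the abstract above can be sketched with Dijkstra's algorithm over a hypothetical weighted association network (entity names and weights invented for illustration; lower weight stands for a stronger, more direct association):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; graph maps node -> {neighbor: weight}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nb, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb], prev[nb] = nd, node
                heapq.heappush(heap, (nd, nb))
    # Reconstruct the association path back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# Hypothetical crime-entity network (weights derived from co-occurrence strength)
net = {
    "suspect A": {"phone 1": 1.0, "suspect B": 5.0},
    "phone 1": {"suspect A": 1.0, "suspect B": 1.0},
    "suspect B": {"phone 1": 1.0, "suspect A": 5.0},
}
print(shortest_path(net, "suspect A", "suspect B"))
```

Here the two-hop path through the shared phone outweighs the weak direct link, which is the intuition behind ranking association paths rather than single co-occurrence links.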
  6. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.01
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study to apply a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that three parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by solving discrepancies between indexer vocabularies and end user vocabularies.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
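The Hopfield-net exploration described above can be sketched as iterative spreading activation over a term-association network, with a sigmoid transfer function; the three tuned parameters from the abstract appear as arguments. This is a toy version under assumed weights and an assumed sigmoid shape, not the paper's exact formulation:

```python
import math

def hopfield_search(weights, query_terms, max_nodes=3, max_error=1e-3, max_iter=50):
    """Spreading activation over a term network until activations stabilize."""
    terms = list(weights)
    act = {t: 1.0 if t in query_terms else 0.0 for t in terms}
    for _ in range(max_iter):
        new = {}
        for t in terms:
            net_input = sum(weights[s].get(t, 0.0) * act[s] for s in terms)
            new[t] = 1.0 / (1.0 + math.exp(-(net_input - 0.5) * 10))  # sigmoid
        for q in query_terms:
            new[q] = 1.0  # query terms stay clamped on
        converged = sum(abs(new[t] - act[t]) for t in terms) < max_error
        act = new
        if converged:
            break
    ranked = sorted((t for t in terms if t not in query_terms),
                    key=lambda t: -act[t])
    return ranked[:max_nodes]  # cap on activated nodes

# Hypothetical symmetric association weights between index terms
w = {
    "neural network": {"hopfield net": 0.9, "indexing": 0.2},
    "hopfield net": {"neural network": 0.9, "associative memory": 0.8},
    "associative memory": {"hopfield net": 0.8},
    "indexing": {"neural network": 0.2},
}
print(hopfield_search(w, {"neural network"}, max_nodes=2))
```

Activation flows from the query term to strongly linked neighbors and then onward, surfacing "associative memory" even though it has no direct link to the query term.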
  7. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.01
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, significant studies have focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction. In particular, we modified the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In our experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, our trace-tree kernel outperforms the standard tree kernel, and a composite kernel outperforms individual kernels for relation extraction.
    Type
    a
  8. Chen, H.; Chung, W.; Qin, J.; Reid, E.; Sageman, M.; Weimann, G.: Uncovering the dark Web : a case study of Jihad on the Web (2008) 0.01
    
    Abstract
    While the Web has become a worldwide platform for communication, terrorists share their ideology and communicate with members on the Dark Web - the reverse side of the Web used by terrorists. Currently, the problems of information overload and difficulty to obtain a comprehensive picture of terrorist activities hinder effective and efficient analysis of terrorist information on the Web. To improve understanding of terrorist activities, we have developed a novel methodology for collecting and analyzing Dark Web information. The methodology incorporates information collection, analysis, and visualization techniques, and exploits various Web information sources. We applied it to collecting and analyzing information of 39 Jihad Web sites and developed visualization of their site contents, relationships, and activity levels. An expert evaluation showed that the methodology is very useful and promising, having a high potential to assist in investigation and understanding of terrorist activities by producing results that could potentially help guide both policymaking and intelligence research.
    Type
    a
  9. Chen, H.; Ng, T.D.; Martinez, J.; Schatz, B.R.: A concept space approach to addressing the vocabulary problem in scientific information retrieval : an experiment on the Worm Community System (1997) 0.01
    
    Abstract
    This research presents an algorithmic approach to addressing the vocabulary problem in scientific information retrieval and information sharing, using the molecular biology domain as an example. We first present a literature review of cognitive studies related to the vocabulary problem and vocabulary-based search aids (thesauri) and then discuss techniques for building robust and domain-specific thesauri to assist in cross-domain scientific information retrieval. Using a variation of the automatic thesaurus generation techniques, which we refer to as the concept space approach, we recently conducted an experiment in the molecular biology domain in which we created a C. elegans worm thesaurus of 7,657 worm-specific terms and a Drosophila fly thesaurus of 15,626 terms. About 30% of these terms overlapped, which created vocabulary paths from one subject domain to the other. Based on a cognitive study of term association involving 4 biologists, we found that a large percentage (59.6-85.6%) of the terms suggested by the subjects were identified in the conjoined fly-worm thesaurus. However, we found only a small percentage (8.4-18.1%) of the associations suggested by the subjects in the thesaurus.
    Type
    a
  10. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.01
    
    Abstract
    Medical professionals and researchers need information from reputable sources to accomplish their work. Unfortunately, the Web has a large number of documents that are irrelevant to their work, even those documents that purport to be "medically-related." This paper describes an architecture designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, or "concept space," and Kohonen-based Self-Organizing Map (SOM) technologies to provide searchers with fine-grained results. Initial results indicate that these systems provide complementary retrieval functionalities. HelpfulMed not only allows users to search Web pages and other online databases, but also allows them to build searches through the use of an automatic thesaurus and browse a graphical display of medical-related topics. Evaluation results for each of the different components are included. Our spidering algorithm outperformed both breadth-first search and PageRank spiders on a test collection of 100,000 Web pages. The automatically generated thesaurus performed as well as both MeSH and UMLS, systems which require human mediation for currency. Lastly, a variant of the Kohonen SOM was comparable to MeSH terms in perceived cluster precision and significantly better at perceived cluster recall.
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
    Type
    a
  11. Qin, J.; Zhou, Y.; Chau, M.; Chen, H.: Multilingual Web retrieval : an experiment in English-Chinese business intelligence (2006) 0.01
    
    Abstract
    As increasing numbers of non-English resources have become available on the Web, the interesting and important issue of how Web users can retrieve documents in different languages has arisen. Cross-language information retrieval (CLIR), the study of retrieving information in one language by queries expressed in another language, is a promising approach to the problem. Cross-language information retrieval has attracted much attention in recent years. Most research systems have achieved satisfactory performance on standard Text REtrieval Conference (TREC) collections such as news articles, but CLIR techniques have not been widely studied and evaluated for applications such as Web portals. In this article, the authors present their research in developing and evaluating a multilingual English-Chinese Web portal that incorporates various CLIR techniques for use in the business domain. A dictionary-based approach was adopted that combines phrasal translation, co-occurrence analysis, and pre- and posttranslation query expansion. The portal was evaluated by domain experts, using a set of queries in both English and Chinese. The experimental results showed that co-occurrence-based phrasal translation achieved a 74.6% improvement in precision over simple word-by-word translation. When used together, pre- and posttranslation query expansion improved the performance slightly, achieving a 78.0% improvement over the baseline word-by-word translation approach. In general, applying CLIR techniques in Web applications shows promise.
    Type
    a
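The co-occurrence-based translation disambiguation mentioned above can be sketched as follows (hypothetical dictionary entries and co-occurrence counts, invented for illustration): among the candidate translations of each query word, pick the one best supported by co-occurrence with the candidates of the other query words.

```python
# Hypothetical bilingual dictionary: each source word has candidate translations
dictionary = {
    "bank": ["银行", "河岸"],      # financial bank vs. river bank
    "interest": ["利息", "兴趣"],  # financial interest vs. curiosity
}

# Hypothetical target-language co-occurrence counts from a corpus
cooc = {
    ("银行", "利息"): 120, ("银行", "兴趣"): 5,
    ("河岸", "利息"): 1,   ("河岸", "兴趣"): 3,
}

def cooc_count(a, b):
    return cooc.get((a, b), 0) + cooc.get((b, a), 0)

def translate_query(words):
    """For each word, pick the candidate translation that co-occurs most
    strongly with the other words' candidate translations."""
    result = []
    for i, w in enumerate(words):
        others = [c for j, o in enumerate(words) if j != i for c in dictionary[o]]
        best = max(dictionary[w],
                   key=lambda cand: sum(cooc_count(cand, c) for c in others))
        result.append(best)
    return result

print(translate_query(["bank", "interest"]))
```

The query context resolves both ambiguities toward the financial senses, which is the effect behind the reported precision gain over word-by-word translation.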
  12. Chau, M.; Wong, C.H.; Zhou, Y.; Qin, J.; Chen, H.: Evaluating the use of search engine development tools in IT education (2010) 0.01
    
    Abstract
    It is important for education in computer science and information systems to keep up to date with the latest development in technology. With the rapid development of the Internet and the Web, many schools have included Internet-related technologies, such as Web search engines and e-commerce, as part of their curricula. Previous research has shown that it is effective to use search engine development tools to facilitate students' learning. However, the effectiveness of these tools in the classroom has not been evaluated. In this article, we review the design of three search engine development tools, SpidersRUs, Greenstone, and Alkaline, followed by an evaluation study that compared the three tools in the classroom. In the study, 33 students were divided into 13 groups and each group used the three tools to develop three independent search engines in a class project. Our evaluation results showed that SpidersRUs performed better than the two other tools in overall satisfaction and the level of knowledge gained in their learning experience when using the tools for a class project on Internet applications development.
    Type
    a
  13. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.01
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed a strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
    Type
    a
  14. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992)
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing, it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are examined to find out: first, what is browsing and what cognitive processes are associated with it; second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject of the Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed. Three browsing strategies were identified: (1) search-oriented browse, scanning and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing interesting information in the presence of transient browse goals that represent changing tasks; and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently than did novices. Based on these findings, suggestions are made to hypertext developers.
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
    Type
    a
  15. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005)
    Abstract
    The increasing amount of publicly available literature and experimental data in biomedicine makes it hard for biomedical researchers to stay up to date. Genescene is a toolkit that will help alleviate this problem by providing an overview of published literature content. We combined a linguistic parser with Concept Space, a co-occurrence-based semantic net. Both techniques extract complementary biomedical relations between noun phrases from MEDLINE abstracts. The parser extracts precise and semantically rich relations from individual abstracts. Concept Space extracts relations that hold true for the collection of abstracts. The Gene Ontology, the Human Genome Nomenclature, and the Unified Medical Language System are also integrated in Genescene. Currently, they are used to facilitate the integration of the two relation types, and to select the more interesting and high-quality relations for presentation. A user study focusing on p53 literature is discussed. All MEDLINE abstracts discussing p53 were processed in Genescene. Two researchers evaluated the terms and relations from several abstracts of interest to them. The results show that the terms were precise (precision 93%) and relevant, as were the parser relations (precision 95%). The Concept Space relations were more precise when selected with ontological knowledge (precision 78%) than without (60%).
    Date
    22. 7.2006 14:26:01
    Type
    a
  16. Hu, D.; Kaza, S.; Chen, H.: Identifying significant facilitators of dark network evolution (2009)
    Abstract
    Social networks evolve over time with the addition and removal of nodes and links to survive and thrive in their environments. Previous studies have shown that the link-formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the important facilitators. In a research partnership with law enforcement agencies, we used dynamic social-network analysis methods to examine several plausible facilitators of co-offending relationships in a large-scale narcotics network consisting of individuals and vehicles. Multivariate Cox regression and a two-proportion z-test on cyclic and focal closures of the network showed that mutual acquaintance and vehicle affiliations were significant facilitators for the network under study. We also found that homophily with respect to age, race, and gender was not a good predictor of future link formation in these networks. Moreover, we examined the social causes and policy implications of the significance and insignificance of various facilitators, including common jails, for future co-offending. These findings provide important insights into the link-formation processes and the resilience of social networks. In addition, they can be used to aid in the prediction of future links. The methods described can also help in understanding the driving forces behind the formation and evolution of social networks facilitated by mobile and Web technologies.
    Date
    22. 3.2009 18:50:30
    Type
    a
  17. Chen, H.: Intelligence and security informatics : Introduction to the special topic issue (2005)
    Abstract
    Making the Nation Safer: The Role of Science and Technology in Countering Terrorism. The commitment of the scientific, engineering, and health communities to helping the United States and the world respond to security challenges became evident after September 11, 2001. The U.S. National Research Council's report on "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism" (National Research Council, 2002, p. 1) explains the context of such a new commitment: Terrorism is a serious threat to the security of the United States and indeed the world. The vulnerability of societies to terrorist attacks results in part from the proliferation of chemical, biological, and nuclear weapons of mass destruction, but it also is a consequence of the highly efficient and interconnected systems that we rely on for key services such as transportation, information, energy, and health care. The efficient functioning of these systems reflects great technological achievements of the past century, but interconnectedness within and across systems also means that infrastructures are vulnerable to local disruptions, which could lead to widespread or catastrophic failures. As terrorists seek to exploit these vulnerabilities, it is fitting that we harness the nation's exceptional scientific and technological capabilities to counter terrorist threats. A committee of 24 of the leading scientific, engineering, medical, and policy experts in the United States conducted the study described in the report. Eight panels were separately appointed and asked to provide input to the committee. The panels included: (a) biological sciences, (b) chemical issues, (c) nuclear and radiological issues, (d) information technology, (e) transportation, (f) energy facilities, cities, and fixed infrastructure, (g) behavioral, social, and institutional issues, and (h) systems analysis and systems engineering.
    The focus of the committee's work was to make the nation safer from emerging terrorist threats that sought to inflict catastrophic damage on the nation's people, its infrastructure, or its economy. The committee considered nine areas, each of which is discussed in a separate chapter in the report: nuclear and radiological materials, human and agricultural health systems, toxic chemicals and explosive materials, information technology, energy systems, transportation systems, cities and fixed infrastructure, the response of people to terrorism, and complex and interdependent systems. The chapter on information technology (IT) is particularly relevant to this special issue. The report recommends that "a strategic long-term research and development agenda should be established to address three primary counterterrorism-related areas in IT: information and network security, the IT needs of emergency responders, and information fusion and management" (National Research Council, 2002, pp. 11-12). The R&D in information and network security should include approaches and architectures for prevention, identification, and containment of cyber-intrusions and recovery from them. The R&D to address the IT needs of emergency responders should include ensuring interoperability, maintaining and expanding communications capability during an emergency, communicating with the public during an emergency, and providing support for decision makers. The R&D in information fusion and management for the intelligence, law enforcement, and emergency response communities should include data mining, data integration, language technologies, and processing of image and audio data. Much of the research reported in this special issue is related to information fusion and management for homeland security.
    Type
    a
  18. Liu, X.; Kaza, S.; Zhang, P.; Chen, H.: Determining inventor status and its effect on knowledge diffusion : a study on nanotechnology literature from China, Russia, and India (2011)
    Type
    a
  19. Schumaker, R.P.; Chen, H.: Evaluating a news-aware quantitative trader : the effect of momentum and contrarian stock selection strategies (2008)
    Abstract
    We study the coupling of basic quantitative portfolio selection strategies with a financial news article prediction system, AZFinText. By varying the degrees of portfolio formation time, we found that a hybrid system using both quantitative strategy and a full set of financial news articles performed the best. With a 1-week portfolio formation period, we achieved a 20.79% trading return using a Momentum strategy and a 4.54% return using a Contrarian strategy over a 5-week holding period. We also found that trader overreaction to these events led AZFinText to capitalize on these short-term surges in price.
    Type
    a
  20. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995)
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies
    Type
    a

Types

  • a 61
  • el 1