Search (61 results, page 1 of 4)

  • author_ss:"Chen, H."
  1. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.03
    0.03272947 = product of:
      0.06545894 = sum of:
        0.02244363 = weight(_text_:information in 3845) [ClassicSimilarity], result of:
          0.02244363 = score(doc=3845,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2687516 = fieldWeight in 3845, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3845)
        0.043015312 = product of:
          0.086030625 = sum of:
            0.086030625 = weight(_text_:retrieval in 3845) [ClassicSimilarity], result of:
              0.086030625 = score(doc=3845,freq=10.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.59785134 = fieldWeight in 3845, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
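    The explain trees in this listing follow Lucene's ClassicSimilarity (TF-IDF). As a sketch, the first result's 0.03 score can be recomputed from the factors printed above; the idf, tf, queryWeight, and fieldWeight formulas are standard ClassicSimilarity, and the constants are read directly off the tree:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, idf_val, query_norm, field_norm):
    # score = queryWeight * fieldWeight
    #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    return (idf_val * query_norm) * (math.sqrt(freq) * idf_val * field_norm)

idf_info = idf(20772, 44218)   # ≈ 1.7554779, as printed above
idf_retr = idf(5836, 44218)    # ≈ 3.024915

s_info = term_score(6.0, idf_info, 0.047571484, 0.0625)   # ≈ 0.02244363
s_retr = term_score(10.0, idf_retr, 0.047571484, 0.0625)  # ≈ 0.08603062

# inner coord(1/2) halves the "retrieval" clause,
# outer coord(2/4) halves the sum of both clauses
total = 0.5 * (s_info + 0.5 * s_retr)
print(round(total, 8))  # ≈ 0.03272947, the score shown for result 1
```

    The same recipe reproduces every tree below; only freq, idf, and fieldNorm change per document and field.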
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, the authors discuss the broader implications of the research for the design of information retrieval systems
    Source
    Information processing and management. 27(1991) no.5, S.405-432
  2. Chen, H.: Knowledge-based document retrieval : framework and design (1992) 0.03
    0.032194868 = product of:
      0.064389735 = sum of:
        0.025915671 = weight(_text_:information in 5283) [ClassicSimilarity], result of:
          0.025915671 = score(doc=5283,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.3103276 = fieldWeight in 5283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=5283)
        0.038474064 = product of:
          0.07694813 = sum of:
            0.07694813 = weight(_text_:retrieval in 5283) [ClassicSimilarity], result of:
              0.07694813 = score(doc=5283,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.5347345 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.125 = fieldNorm(doc=5283)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Journal of information science. 18(1992), S.293-314
  3. Chen, H.: Machine learning for information retrieval : neural networks, symbolic learning, and genetic algorithms (1994) 0.03
    0.030718692 = product of:
      0.061437383 = sum of:
        0.027772574 = weight(_text_:information in 2657) [ClassicSimilarity], result of:
          0.027772574 = score(doc=2657,freq=12.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.3325631 = fieldWeight in 2657, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2657)
        0.033664808 = product of:
          0.067329615 = sum of:
            0.067329615 = weight(_text_:retrieval in 2657) [ClassicSimilarity], result of:
              0.067329615 = score(doc=2657,freq=8.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.46789268 = fieldWeight in 2657, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2657)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, researchers have turned to newer artificial intelligence based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms grounded on diverse paradigms. These have provided great opportunities to enhance the capabilities of current information storage and retrieval systems. Provides an overview of these techniques and presents 3 popular methods: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms in the context of information retrieval. The techniques are promising in their ability to analyze user queries, identify users' information needs, and suggest alternatives for search, and can greatly complement the prevailing full-text, keyword-based, probabilistic, and knowledge-based techniques
    Source
    Journal of the American Society for Information Science. 46(1995) no.3, S.194-216
  4. Chen, H.; Shankaranarayanan, G.; She, L.: A machine learning approach to inductive query by examples : an experiment using relevance feedback, ID3, genetic algorithms, and simulated annealing (1998) 0.03
    0.030708306 = product of:
      0.06141661 = sum of:
        0.02915513 = weight(_text_:information in 1148) [ClassicSimilarity], result of:
          0.02915513 = score(doc=1148,freq=18.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.34911853 = fieldWeight in 1148, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1148)
        0.032261483 = product of:
          0.06452297 = sum of:
            0.06452297 = weight(_text_:retrieval in 1148) [ClassicSimilarity], result of:
              0.06452297 = score(doc=1148,freq=10.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.44838852 = fieldWeight in 1148, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1148)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, information science researchers have turned to other newer inductive learning techniques including symbolic learning, genetic algorithms, and simulated annealing. These newer techniques, which are grounded in diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information systems. In this article, we first provide an overview of these newer techniques and their use in information retrieval research. In order to familiarize readers with the techniques, we present 3 promising methods: the symbolic ID3 algorithm, evolution-based genetic algorithms, and simulated annealing. We discuss their knowledge representations and algorithms in the unique context of information retrieval
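    Of the three methods named in this abstract, simulated annealing is the easiest to sketch. The toy below anneals a set of query terms toward documents a user marked relevant; the vocabulary, documents, and Jaccard fitness function are invented for illustration, not taken from the paper:

```python
import math
import random

relevant = [{"neural", "retrieval"}, {"neural", "indexing", "retrieval"}]
vocab = ["neural", "retrieval", "indexing", "cooking"]

def fitness(q):
    # mean Jaccard overlap between the candidate query and the relevant docs
    if not q:
        return 0.0
    return sum(len(q & d) / len(q | d) for d in relevant) / len(relevant)

def anneal_query(vocab, fitness, steps=300, t0=1.0, seed=7):
    rng = random.Random(seed)
    query, f = set(), fitness(set())
    best, best_f = set(), f
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9            # linear cooling schedule
        cand = set(query)
        cand.symmetric_difference_update({rng.choice(vocab)})  # toggle one term
        cf = fitness(cand)
        # always accept improvements; accept worse moves with prob exp(delta/T)
        if cf >= f or rng.random() < math.exp((cf - f) / t):
            query, f = cand, cf
            if f > best_f:
                best, best_f = set(query), f
    return best

q = anneal_query(vocab, fitness)
print(sorted(q), round(fitness(q), 2))  # an optimum here has fitness ≈ 0.83
```

    With this tiny search space the walk reliably finds a term set that covers the relevant documents while dropping the irrelevant "cooking" term.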
    Source
    Journal of the American Society for Information Science. 49(1998) no.8, S.693-705
  5. Dumais, S.; Chen, H.: Hierarchical classification of Web content (2000) 0.03
    0.030122329 = product of:
      0.060244657 = sum of:
        0.019436752 = weight(_text_:information in 492) [ClassicSimilarity], result of:
          0.019436752 = score(doc=492,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.23274569 = fieldWeight in 492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=492)
        0.040807907 = product of:
          0.08161581 = sum of:
            0.08161581 = weight(_text_:retrieval in 492) [ClassicSimilarity], result of:
              0.08161581 = score(doc=492,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.5671716 = fieldWeight in 492, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=492)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Proceedings of ACM SIGIR 23rd International Conference on Research and Development in Information Retrieval. Ed. by N.J. Belkin, P. Ingwersen and M.K. Leong
    Theme
    Classification systems in online retrieval
  6. Zhu, B.; Chen, H.: Validating a geographical image retrieval system (2000) 0.03
    0.026086703 = product of:
      0.052173406 = sum of:
        0.016832722 = weight(_text_:information in 4769) [ClassicSimilarity], result of:
          0.016832722 = score(doc=4769,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.20156369 = fieldWeight in 4769, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4769)
        0.035340685 = product of:
          0.07068137 = sum of:
            0.07068137 = weight(_text_:retrieval in 4769) [ClassicSimilarity], result of:
              0.07068137 = score(doc=4769,freq=12.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.49118498 = fieldWeight in 4769, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4769)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment that validates the system's performance against that of human subjects, in an effort to address the scarcity of research evaluating the performance of algorithms against that of human beings. The results of the experiment indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances the texture features of an image are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms
    Source
    Journal of the American Society for Information Science. 51(2000) no.7, S.625-634
  7. Huang, Z.; Chung, Z.W.; Chen, H.: A graph model for e-commerce recommender systems (2004) 0.03
    0.02529325 = product of:
      0.0505865 = sum of:
        0.021730952 = weight(_text_:information in 501) [ClassicSimilarity], result of:
          0.021730952 = score(doc=501,freq=10.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2602176 = fieldWeight in 501, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=501)
        0.02885555 = product of:
          0.0577111 = sum of:
            0.0577111 = weight(_text_:retrieval in 501) [ClassicSimilarity], result of:
              0.0577111 = score(doc=501,freq=8.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40105087 = fieldWeight in 501, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=501)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Information overload on the Web has created enormous challenges to customers selecting products for online purchases and to online businesses attempting to identify customers' preferences efficiently. Various recommender systems employing different data representations and recommendation methods are currently used to address these challenges. In this research, we developed a graph model that provides a generic data representation and can support different recommendation methods. To demonstrate its usefulness and flexibility, we developed three recommendation methods: direct retrieval, association mining, and high-degree association retrieval. We used a data set from an online bookstore as our research test-bed. Evaluation results showed that combining product content information and historical customer transaction information achieved more accurate predictions and relevant recommendations than using only collaborative information. However, comparisons among different methods showed that high-degree association retrieval did not perform significantly better than the association mining method or the direct retrieval method in our test-bed.
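    A minimal sketch of the high-degree-association idea described above: rank candidate products by counting 3-hop paths (customer -> product -> customer -> product) in the bipartite purchase graph. The bookstore data and function names are invented for illustration:

```python
from collections import defaultdict

def recommend(purchases, target):
    # Count 3-hop paths: target -> owned item -> other customer -> new item
    owned = purchases[target]
    scores = defaultdict(int)
    for other, items in purchases.items():
        if other == target:
            continue
        overlap = len(owned & items)   # number of 2-hop paths to this customer
        for p in items - owned:
            scores[p] += overlap       # each 2-hop path extends to product p
    return sorted(scores, key=scores.get, reverse=True)

purchases = {
    "u1": {"book_a", "book_b"},
    "u2": {"book_a", "book_c"},
    "u3": {"book_b", "book_c", "book_d"},
}
print(recommend(purchases, "u1"))  # ['book_c', 'book_d']
```

    book_c is reachable from u1 by two 3-hop paths (via u2 and u3), book_d by one, so book_c ranks first; longer odd-length paths would capture still higher-degree associations.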
    Source
    Journal of the American Society for Information Science and technology. 55(2004) no.3, S.259-274
  8. Schatz, B.R.; Johnson, E.H.; Cochrane, P.A.; Chen, H.: Interactive term suggestion for users of digital libraries : using thesauri and co-occurrence lists for information retrieval (1996) 0.03
    0.025101941 = product of:
      0.050203882 = sum of:
        0.016197294 = weight(_text_:information in 6417) [ClassicSimilarity], result of:
          0.016197294 = score(doc=6417,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.19395474 = fieldWeight in 6417, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=6417)
        0.03400659 = product of:
          0.06801318 = sum of:
            0.06801318 = weight(_text_:retrieval in 6417) [ClassicSimilarity], result of:
              0.06801318 = score(doc=6417,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.47264296 = fieldWeight in 6417, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6417)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Theme
    Semantic environment in indexing and retrieval
  9. Chen, H.: Introduction to the JASIST special topic section on Web retrieval and mining : A machine learning perspective (2003) 0.02
    0.023360297 = product of:
      0.046720594 = sum of:
        0.021730952 = weight(_text_:information in 1610) [ClassicSimilarity], result of:
          0.021730952 = score(doc=1610,freq=10.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2602176 = fieldWeight in 1610, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1610)
        0.02498964 = product of:
          0.04997928 = sum of:
            0.04997928 = weight(_text_:retrieval in 1610) [ClassicSimilarity], result of:
              0.04997928 = score(doc=1610,freq=6.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.34732026 = fieldWeight in 1610, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1610)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Research in information retrieval (IR) has advanced significantly in the past few decades. Many tasks, such as indexing and text categorization, can be performed automatically with minimal human effort. Machine learning has played an important role in such automation by learning various patterns such as document topics, text structures, and user interests from examples. In recent years, it has become increasingly difficult to search for useful information on the World Wide Web because of its large size and unstructured nature. Useful information and resources are often hidden in the Web. While machine learning has been successfully applied to traditional IR systems, the Web poses some new challenges for these algorithms due to its large size, link structure, diversity in content and languages, and dynamic nature. On the other hand, such characteristics of the Web also provide interesting patterns and knowledge that are not present in traditional information retrieval systems.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.621-624
  10. Chen, H.: Semantic research for digital libraries (1999) 0.02
    0.021791453 = product of:
      0.043582905 = sum of:
        0.02915513 = weight(_text_:information in 1247) [ClassicSimilarity], result of:
          0.02915513 = score(doc=1247,freq=18.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.34911853 = fieldWeight in 1247, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1247)
        0.014427775 = product of:
          0.02885555 = sum of:
            0.02885555 = weight(_text_:retrieval in 1247) [ClassicSimilarity], result of:
              0.02885555 = score(doc=1247,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.20052543 = fieldWeight in 1247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1247)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this era of the Internet and distributed, multimedia computing, new and emerging classes of information systems applications have swept into the lives of office workers and people in general. From digital libraries, multimedia systems, geographic information systems, and collaborative computing to electronic commerce, virtual reality, and electronic video arts and games, these applications have created tremendous opportunities for information and computer science researchers and practitioners. As applications become more pervasive, pressing, and diverse, several well-known information retrieval (IR) problems have become even more urgent. Information overload, a result of the ease of information creation and transmission via the Internet and WWW, has become more troublesome (e.g., even stockbrokers and elementary school students, heavily exposed to various WWW search engines, are versed in such IR terminology as recall and precision). Significant variations in database formats and structures, the richness of information media (text, audio, and video), and an abundance of multilingual information content also have created severe information interoperability problems -- structural interoperability, media interoperability, and multilingual interoperability.
  11. Qin, J.; Zhou, Y.; Chau, M.; Chen, H.: Multilingual Web retrieval : an experiment in English-Chinese business intelligence (2006) 0.02
    0.021077707 = product of:
      0.042155415 = sum of:
        0.018109124 = weight(_text_:information in 5054) [ClassicSimilarity], result of:
          0.018109124 = score(doc=5054,freq=10.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.21684799 = fieldWeight in 5054, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5054)
        0.02404629 = product of:
          0.04809258 = sum of:
            0.04809258 = weight(_text_:retrieval in 5054) [ClassicSimilarity], result of:
              0.04809258 = score(doc=5054,freq=8.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.33420905 = fieldWeight in 5054, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5054)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    As increasing numbers of non-English resources have become available on the Web, the interesting and important issue of how Web users can retrieve documents in different languages has arisen. Cross-language information retrieval (CLIR), the study of retrieving information in one language by queries expressed in another language, is a promising approach to the problem. Cross-language information retrieval has attracted much attention in recent years. Most research systems have achieved satisfactory performance on standard Text REtrieval Conference (TREC) collections such as news articles, but CLIR techniques have not been widely studied and evaluated for applications such as Web portals. In this article, the authors present their research in developing and evaluating a multilingual English-Chinese Web portal that incorporates various CLIR techniques for use in the business domain. A dictionary-based approach was adopted that combines phrasal translation, co-occurrence analysis, and pre- and posttranslation query expansion. The portal was evaluated by domain experts, using a set of queries in both English and Chinese. The experimental results showed that co-occurrence-based phrasal translation achieved a 74.6% improvement in precision over simple word-by-word translation. When used together, pre- and posttranslation query expansion improved the performance slightly, achieving a 78.0% improvement over the baseline word-by-word translation approach. In general, applying CLIR techniques in Web applications shows promise.
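    The co-occurrence disambiguation step described above can be sketched as follows: among the dictionary's candidate translations, keep the combination whose target-language terms co-occur most often. The bilingual dictionary and co-occurrence counts below are invented; a real system would mine them from a target-language corpus:

```python
from itertools import product

def translate_query(query_terms, bilingual_dict, cooc):
    # candidate translations per source term (fall back to the term itself)
    candidates = [bilingual_dict.get(t, [t]) for t in query_terms]

    def cohesion(combo):
        # sum co-occurrence counts over all distinct pairs in the combination
        return sum(cooc.get(frozenset(p), 0)
                   for p in product(combo, combo) if len(frozenset(p)) == 2)

    return max(product(*candidates), key=cohesion)

bilingual_dict = {"bank": ["银行", "河岸"], "loan": ["贷款"]}  # bank/riverbank
cooc = {frozenset(["银行", "贷款"]): 120, frozenset(["河岸", "贷款"]): 1}
print(translate_query(["bank", "loan"], bilingual_dict, cooc))  # ('银行', '贷款')
```

    The financial sense of "bank" wins because it co-occurs far more often with the translation of "loan"; word-by-word translation would have no basis for the choice.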
    Footnote
    Contribution to a special topic section on multilingual information systems
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.5, S.671-683
  12. Chen, H.; Ng, T.D.; Martinez, J.; Schatz, B.R.: A concept space approach to addressing the vocabulary problem in scientific information retrieval : an experiment on the Worm Community System (1997) 0.02
    0.01946691 = product of:
      0.03893382 = sum of:
        0.018109124 = weight(_text_:information in 6492) [ClassicSimilarity], result of:
          0.018109124 = score(doc=6492,freq=10.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.21684799 = fieldWeight in 6492, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6492)
        0.020824699 = product of:
          0.041649397 = sum of:
            0.041649397 = weight(_text_:retrieval in 6492) [ClassicSimilarity], result of:
              0.041649397 = score(doc=6492,freq=6.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.28943354 = fieldWeight in 6492, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6492)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This research presents an algorithmic approach to addressing the vocabulary problem in scientific information retrieval and information sharing, using the molecular biology domain as an example. We first present a literature review of cognitive studies related to the vocabulary problem and vocabulary-based search aids (thesauri) and then discuss techniques for building robust and domain-specific thesauri to assist in cross-domain scientific information retrieval. Using a variation of the automatic thesaurus generation techniques, which we refer to as the concept space approach, we recently conducted an experiment in the molecular biology domain in which we created a C. elegans worm thesaurus of 7.657 worm-specific terms and a Drosophila fly thesaurus of 15.626 terms. About 30% of these terms overlapped, which created vocabulary paths from one subject domain to the other. Based on a cognitve study of term association involving 4 biologists, we found that a large percentage (59,6-85,6%) of the terms suggested by the subjects were identified in the cojoined fly-worm thesaurus. However, we found only a small percentage (8,4-18,1%) of the associations suggested by the subjects in the thesaurus
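    A toy version of the concept space idea in this abstract: build asymmetric association weights from document co-occurrence, W(a, b) = df(a and b) / df(a). The published approach uses tf*idf-weighted co-occurrence; plain document frequencies keep the sketch short, and the worm/fly terms are illustrative only:

```python
from collections import defaultdict

def concept_space(docs):
    # df[a]: documents containing a; co[(a, b)]: documents containing both
    df = defaultdict(int)
    co = defaultdict(int)
    for terms in docs:
        terms = set(terms)
        for a in terms:
            df[a] += 1
            for b in terms:
                if a != b:
                    co[(a, b)] += 1
    # asymmetric association from a to b
    return {(a, b): n / df[a] for (a, b), n in co.items()}

docs = [
    {"worm", "gene", "expression"},
    {"worm", "gene", "mutation"},
    {"fly", "gene", "mutation"},
]
w = concept_space(docs)
print(w[("worm", "gene")])            # 1.0: "gene" appears in every "worm" doc
print(round(w[("gene", "worm")], 2))  # 0.67: the reverse association is weaker
```

    Terms shared across the worm and fly sub-corpora ("gene", "mutation" here) are what create the cross-domain vocabulary paths the experiment measures.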
    Source
    Journal of the American Society for Information Science. 48(1997) no.1, S.17-31
  13. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.02
    0.019386295 = product of:
      0.03877259 = sum of:
        0.019436752 = weight(_text_:information in 2733) [ClassicSimilarity], result of:
          0.019436752 = score(doc=2733,freq=8.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.23274569 = fieldWeight in 2733, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2733)
        0.019335838 = product of:
          0.038671676 = sum of:
            0.038671676 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
              0.038671676 = score(doc=2733,freq=2.0), product of:
                0.16658723 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047571484 = queryNorm
                0.23214069 = fieldWeight in 2733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2733)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.595-607
    Theme
    Information Gateway
  14. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.02
    0.017425984 = product of:
      0.03485197 = sum of:
        0.01402727 = weight(_text_:information in 1615) [ClassicSimilarity], result of:
          0.01402727 = score(doc=1615,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.16796975 = fieldWeight in 1615, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1615)
        0.020824699 = product of:
          0.041649397 = sum of:
            0.041649397 = weight(_text_:retrieval in 1615) [ClassicSimilarity], result of:
              0.041649397 = score(doc=1615,freq=6.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.28943354 = fieldWeight in 1615, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1615)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Medical professionals and researchers need information from reputable sources to accomplish their work. Unfortunately, the Web has a large number of documents that are irrelevant to their work, even those documents that purport to be "medically-related." This paper describes an architecture designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, or "concept space," and Kohonen-based Self-Organizing Map (SOM) technologies to provide searchers with fine-grained results. Initial results indicate that these systems provide complementary retrieval functionalities. HelpfulMed not only allows users to search Web pages and other online databases, but also allows them to build searches through the use of an automatic thesaurus and browse a graphical display of medical-related topics. Evaluation results for each of the different components are included. Our spidering algorithm outperformed both breadth-first search and PageRank spiders on a test collection of 100,000 Web pages. The automatically generated thesaurus performed as well as both MeSH and UMLS, systems which require human mediation for currency. Lastly, a variant of the Kohonen SOM was comparable to MeSH terms in perceived cluster precision and significantly better at perceived cluster recall.
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.683-694
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
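The relevance figures attached to each entry above are Lucene ClassicSimilarity (TF-IDF) explain trees: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and the per-term score is queryWeight * fieldWeight. A minimal sketch that reproduces the weight(_text_:information in 1615) subtree from this entry (the formula follows Lucene's ClassicSimilarity; all constants are taken directly from the explain output):

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Reproduce one weight(...) subtree of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # tf(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm              # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight                # score = queryWeight * fieldWeight

# weight(_text_:information in 1615): freq=6, docFreq=20772, maxDocs=44218,
# fieldNorm=0.0390625, queryNorm=0.047571484
score = classic_similarity_term_score(6.0, 20772, 44218, 0.0390625, 0.047571484)
print(score)  # ≈ 0.01402727, matching the explain tree above
```

The same function reproduces the other weight(...) subtrees in this listing; only freq, docFreq, and fieldNorm vary per entry.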
  15. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.02
    0.016155247 = product of:
      0.032310493 = sum of:
        0.016197294 = weight(_text_:information in 7469) [ClassicSimilarity], result of:
          0.016197294 = score(doc=7469,freq=8.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.19395474 = fieldWeight in 7469, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7469)
        0.0161132 = product of:
          0.0322264 = sum of:
            0.0322264 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
              0.0322264 = score(doc=7469,freq=2.0), product of:
                0.16658723 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047571484 = queryNorm
                0.19345059 = fieldWeight in 7469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7469)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    With the growth of hypertext and multimedia applications that support and encourage browsing it is time to take a penetrating look at browsing behaviour. Several dimensions of browsing are exemined, to find out: first, what is browsing and what cognitive processes are associated with it: second, is there a browsing strategy, and if so, are there any differences between how subject-area experts and novices browse; and finally, how can this knowledge be applied to improve the design of hypertext systems. Two groups of students, subject-area experts and novices, were studied while browsing a Macintosh HyperCard application on the subject The Vietnam War. A protocol analysis technique was used to gather and analyze data. Components of the GOMS model were used to describe the goals, operators, methods, and selection rules observed: Three browsing strategies were identified: (1) search-oriented browse, scanning and and reviewing information relevant to a fixed task; (2) review-browse, scanning and reviewing intersting information in the presence of transient browse goals that represent changing tasks, and (3) scan-browse, scanning for interesting information (without review). Most subjects primarily used review-browse interspersed with search-oriented browse. Within this strategy, comparisons between subject-area experts and novices revealed differences in tactics: experts browsed in more depth, seldom used referential links, selected different kinds of topics, and viewed information differently thatn did novices. Based on these findings, suggestions are made to hypertext developers
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  16. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    0.014085817 = product of:
      0.028171634 = sum of:
        0.01374386 = weight(_text_:information in 5202) [ClassicSimilarity], result of:
          0.01374386 = score(doc=5202,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.16457605 = fieldWeight in 5202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
        0.014427775 = product of:
          0.02885555 = sum of:
            0.02885555 = weight(_text_:retrieval in 5202) [ClassicSimilarity], result of:
              0.02885555 = score(doc=5202,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.20052543 = fieldWeight in 5202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5202)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Source
    Journal of the American Society for Information Science. 49(1998) no.3, S.206-216
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
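The entry above builds system-generated thesauri from co-occurrence analysis of automatically indexed terms. As an illustration only (the paper's actual weighting and object-filtering steps are not reproduced here), a toy sketch of the core idea: count how often term pairs co-occur across documents and normalize by the rarer term's document frequency to rank associated search terms:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_thesaurus(docs, min_count=1):
    """Build a simple term-association map from document term lists.
    Weight: pair co-occurrence count normalized by the rarer term's
    document frequency (a simplified stand-in for the paper's weighting)."""
    term_freq = defaultdict(int)   # documents containing each term
    pair_freq = defaultdict(int)   # documents containing both terms of a pair
    for doc in docs:
        terms = set(doc)
        for t in terms:
            term_freq[t] += 1
        for a, b in combinations(sorted(terms), 2):
            pair_freq[(a, b)] += 1
    assoc = defaultdict(list)
    for (a, b), n in pair_freq.items():
        if n < min_count:
            continue
        w = n / min(term_freq[a], term_freq[b])
        assoc[a].append((b, w))
        assoc[b].append((a, w))
    for t in assoc:
        assoc[t].sort(key=lambda x: -x[1])  # strongest associations first
    return assoc

# Hypothetical mini-collection for illustration
docs = [
    ["parallel", "computing", "indexing"],
    ["automatic", "indexing", "thesaurus"],
    ["thesaurus", "indexing", "co-occurrence"],
]
print(cooccurrence_thesaurus(docs)["indexing"])
```

Terms suggested this way can then be offered alongside a user's own query terms, which is how such a thesaurus reduces search uncertainty.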
  17. Ramsey, M.C.; Chen, H.; Zhu, B.; Schatz, B.R.: ¬A collection of visual thesauri for browsing large collections of geographic images (1999) 0.01
    0.014085256 = product of:
      0.028170511 = sum of:
        0.011338106 = weight(_text_:information in 3922) [ClassicSimilarity], result of:
          0.011338106 = score(doc=3922,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.13576832 = fieldWeight in 3922, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3922)
        0.016832404 = product of:
          0.033664808 = sum of:
            0.033664808 = weight(_text_:retrieval in 3922) [ClassicSimilarity], result of:
              0.033664808 = score(doc=3922,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.23394634 = fieldWeight in 3922, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3922)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Digital libraries of geo-spatial multimedia content are currently deficient in providing fuzzy, concept-based retrieval mechanisms to users. The main challenge is that indexing and thesaurus creation are extremely labor-intensive processes for text documents and especially for images. Recently, 800,000 declassified satellite photographs were made available by the US Geological Survey. Additionally, millions of satellite and aerial photographs are archived in national and local map libraries. Such enormous collections make human indexing and thesaurus generation methods impossible to utilize. In this article we propose a scalable method to automatically generate visual thesauri of large collections of geo-spatial media using fuzzy, unsupervised machine-learning techniques.
    Source
    Journal of the American Society for Information Science. 50(1999) no.9, S.826-834
  18. Chen, H.; Houston, A.L.; Sewell, R.R.; Schatz, B.R.: Internet browsing and searching : user evaluations of category map and concept space techniques (1998) 0.01
    0.013280235 = product of:
      0.02656047 = sum of:
        0.012957836 = weight(_text_:information in 869) [ClassicSimilarity], result of:
          0.012957836 = score(doc=869,freq=8.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.1551638 = fieldWeight in 869, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=869)
        0.013602636 = product of:
          0.027205272 = sum of:
            0.027205272 = weight(_text_:retrieval in 869) [ClassicSimilarity], result of:
              0.027205272 = score(doc=869,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.18905719 = fieldWeight in 869, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=869)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Internet provides an exceptional testbed for developing algorithms that can improve browsing and searching large information spaces. Browsing and searching tasks are susceptible to problems of information overload and vocabulary differences. Much of the current research is aimed at the development and refinement of algorithms to improve browsing and searching by addressing these problems. Our research was focused on discovering whether two of the algorithms our research group has developed, a Kohonen algorithm category map for browsing, and an automatically generated concept space algorithm for searching, can help improve browsing and/or searching the Internet. Our results indicate that a Kohonen self-organizing map (SOM)-based algorithm can successfully categorize a large and eclectic Internet information space (the Entertainment subcategory of Yahoo!) into manageable sub-spaces that users can successfully navigate to locate a homepage of interest to them. The SOM algorithm worked best with browsing tasks that were very broad, and in which subjects skipped around between categories. Subjects especially liked the visual and graphical aspects of the map. Subjects who tried to do a directed search, and those who wanted to use the more familiar mental models (alphabetic or hierarchical organization) for browsing, found that the map did not work well. The results from the concept space experiment were especially encouraging. There were no significant differences among the precision measures for the set of documents identified by subject-suggested terms, thesaurus-suggested terms, and the combination of subject- and thesaurus-suggested terms. The recall measures indicated that the combination of subject- and thesaurus-suggested terms exhibited significantly better recall than subject-suggested terms alone.
Furthermore, analysis of the homepages indicated that there was limited overlap between the homepages retrieved by the subject-suggested and thesaurus-suggested terms. Since the retrieved homepages for the most part were different, this suggests that a user can enhance a keyword-based search by using an automatically generated concept space. Subjects especially liked the level of control that they could exert over the search, and the fact that the terms suggested by the thesaurus were 'real' (i.e., originating in the homepages) and therefore guaranteed to have retrieval success.
    Source
    Journal of the American Society for Information Science. 49(1998) no.7, S.582-603
  19. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.01
    0.012105923 = product of:
      0.024211846 = sum of:
        0.008098647 = weight(_text_:information in 5259) [ClassicSimilarity], result of:
          0.008098647 = score(doc=5259,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.09697737 = fieldWeight in 5259, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5259)
        0.0161132 = product of:
          0.0322264 = sum of:
            0.0322264 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
              0.0322264 = score(doc=5259,freq=2.0), product of:
                0.16658723 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047571484 = queryNorm
                0.19345059 = fieldWeight in 5259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5259)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 7.2006 14:26:01
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.5, S.457-468
  20. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: ¬A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.01
    0.012105923 = product of:
      0.024211846 = sum of:
        0.008098647 = weight(_text_:information in 5276) [ClassicSimilarity], result of:
          0.008098647 = score(doc=5276,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.09697737 = fieldWeight in 5276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5276)
        0.0161132 = product of:
          0.0322264 = sum of:
            0.0322264 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.0322264 = score(doc=5276,freq=2.0), product of:
                0.16658723 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047571484 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 7.2006 16:14:37
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.378-393

Types

  • a 61
  • el 1