Search (61 results, page 2 of 4)

  • author_ss:"Chen, H."
  1. Dumais, S.; Chen, H.: Hierarchical classification of Web content (2000) 0.01
    0.007058388 = product of:
      0.01764597 = sum of:
        0.008173384 = weight(_text_:a in 492) [ClassicSimilarity], result of:
          0.008173384 = score(doc=492,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 492, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=492)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 492) [ClassicSimilarity], result of:
              0.018945174 = score(doc=492,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=492)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Proceedings of ACM SIGIR 23rd International Conference on Research and Development in Information Retrieval. Ed. by N.J. Belkin, P. Ingwersen and M.K. Leong
    Type
    a
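
    The indented tree under each hit is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) ranking formula: a term's score is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm with tf = sqrt(termFreq), and coord() scales a sum by the fraction of query clauses that matched. A minimal Python sketch, using the factor values copied from hit 1 above, reproduces its 0.007058388 score:

      import math

      def classic_sim_term_score(freq, idf, query_norm, field_norm):
          """One term's score in Lucene ClassicSimilarity: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                      # idf(t) * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm    # tf(t) * idf(t) * fieldNorm
          return query_weight * field_weight

      # Factors copied from the explain tree of hit 1 (doc 492):
      idf_a, idf_info = 1.153047, 1.7554779
      query_norm, field_norm = 0.046368346, 0.09375

      w_a    = classic_sim_term_score(2.0, idf_a, query_norm, field_norm)     # ~0.008173384
      w_info = classic_sim_term_score(2.0, idf_info, query_norm, field_norm)  # ~0.018945174

      # coord(1/2) halves the inner clause; coord(2/5) scales the outer sum.
      print(round((w_a + w_info * 0.5) * 0.4, 9))  # ~0.007058388, matching hit 1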
  2. Zhu, B.; Chen, H.: Validating a geographical image retrieval system (2000) 0.01
    0.0069366493 = product of:
      0.017341623 = sum of:
        0.009138121 = weight(_text_:a in 4769) [ClassicSimilarity], result of:
          0.009138121 = score(doc=4769,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 4769, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4769)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 4769) [ClassicSimilarity], result of:
              0.016407004 = score(doc=4769,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 4769, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4769)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment that validates the system's performance against that of human subjects, in an effort to address the scarcity of research evaluating the performance of algorithms against that of human beings. The results of the experiment indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances the texture features of an image are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms.
    Source
    Journal of the American Society for Information Science. 51(2000) no.7, S.625-634
    Type
    a
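
    The abstract above centers on comparing machine similarity judgments with human ones. As a rough illustration only, the sketch below ranks images by cosine similarity over a crude texture descriptor (grey-level histogram plus mean gradient energy); the actual system uses far richer image-processing features, so everything here is a stand-in:

      import numpy as np

      def texture_vector(img, bins=16):
          # Crude texture descriptor: grey-level histogram + mean gradient energy.
          hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
          gy, gx = np.gradient(img.astype(float))
          return np.append(hist, (gx**2 + gy**2).mean())

      def rank_by_similarity(query, collection):
          q = texture_vector(query)
          def cos(v, w):
              return float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
          scores = [(name, cos(q, texture_vector(img))) for name, img in collection.items()]
          return sorted(scores, key=lambda s: -s[1])

      # Toy usage with random greyscale "images" in [0, 1]:
      rng = np.random.default_rng(0)
      tiles = {f"tile{i}": rng.random((32, 32)) for i in range(5)}
      print(rank_by_similarity(tiles["tile0"], tiles))  # tile0 ranks itself first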
  3. Chung, W.; Chen, H.; Reid, E.: Business stakeholder analyzer : an experiment of classifying stakeholders on the Web (2009) 0.01
    0.0067616524 = product of:
      0.01690413 = sum of:
        0.009010308 = weight(_text_:a in 2699) [ClassicSimilarity], result of:
          0.009010308 = score(doc=2699,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 2699, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2699)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 2699) [ClassicSimilarity], result of:
              0.015787644 = score(doc=2699,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 2699, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2699)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    As the Web is used increasingly to share and disseminate information, business analysts and managers are challenged to understand stakeholder relationships. Traditional stakeholder theories and frameworks employ a manual approach to analysis and do not scale up to accommodate the rapid growth of the Web. Unfortunately, existing business intelligence (BI) tools lack analysis capability, and research on BI systems is sparse. This research proposes a framework for designing BI systems to identify and to classify stakeholders on the Web, incorporating human knowledge and machine-learned information from Web pages. Based on the framework, we have developed a prototype called Business Stakeholder Analyzer (BSA) that helps managers and analysts to identify and to classify their stakeholders on the Web. Results from our experiment involving algorithm comparison, feature comparison, and a user study showed that the system achieved better within-class accuracies in widespread stakeholder types such as partner/sponsor/supplier and media/reviewer, and was more efficient than human classification. The student and practitioner subjects in our user study strongly agreed that such a system would save analysts' time and help to identify and classify stakeholders. This research contributes to a better understanding of how to integrate information technology with stakeholder theory, and enriches the knowledge base of BI system design.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.1, S.59-74
    Type
    a
  4. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 5202) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=5202,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 5202, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 5202) [ClassicSimilarity], result of:
              0.013396261 = score(doc=5202,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 5202, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5202)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded on object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400.000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Source
    Journal of the American Society for Information Science. 49(1998) no.3, S.206-216
    Type
    a
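
    To make the co-occurrence step concrete: below is a minimal sketch of within-document co-occurrence counting, normalized by term frequency so that suggestions are asymmetric. The paper's system adds object filtering and runs the analysis on a parallel supercomputer; the normalization and data here are simplified assumptions:

      from collections import Counter
      from itertools import combinations

      def cooccurrence_thesaurus(docs, top_k=3):
          # Suggest related terms by within-document co-occurrence,
          # normalized by how often the source term occurs at all.
          tf, co = Counter(), Counter()
          for doc in docs:
              terms = set(doc.lower().split())
              tf.update(terms)
              co.update(frozenset(p) for p in combinations(sorted(terms), 2))
          related = {}
          for pair, n in co.items():
              a, b = tuple(pair)
              related.setdefault(a, []).append((b, n / tf[a]))
              related.setdefault(b, []).append((a, n / tf[b]))
          return {t: sorted(rs, key=lambda r: -r[1])[:top_k] for t, rs in related.items()}

      docs = ["parallel computing cluster", "parallel indexing cluster", "semantic indexing"]
      print(cooccurrence_thesaurus(docs)["parallel"])  # "cluster" co-occurs in 2 of 2 docs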
  5. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 1611) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=1611,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 1611, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 1611) [ClassicSimilarity], result of:
              0.013396261 = score(doc=1611,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 1611, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1611)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, significant studies have focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction. In particular, we modified the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In our experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, our trace-tree kernel outperforms the standard tree kernel, and a composite kernel outperforms individual kernels for relation extraction.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.5, S.756-769
    Type
    a
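
    A composite kernel is just a weighted sum of valid kernels, which a support-vector machine can consume as a precomputed Gram matrix. The sketch below illustrates that mechanic with two toy Jaccard kernels (word overlap and character bigrams) standing in for the paper's tree and trace kernels; the sentences, labels, and mixing weight are invented for illustration:

      import numpy as np
      from sklearn.svm import SVC

      def word_kernel(a, b):
          # Toy word-overlap kernel (stand-in for the word kernel).
          wa, wb = set(a.split()), set(b.split())
          return len(wa & wb) / max(1, len(wa | wb))

      def bigram_kernel(a, b):
          # Toy character-bigram kernel (stand-in for the sequence/tree kernels).
          ba = {a[i:i + 2] for i in range(len(a) - 1)}
          bb = {b[i:i + 2] for i in range(len(b) - 1)}
          return len(ba & bb) / max(1, len(ba | bb))

      def gram(xs, ys, k):
          return np.array([[k(x, y) for y in ys] for x in xs])

      train = ["protein A binds protein B", "gene X regulates gene Y",
               "protein A is located in the nucleus", "gene X was observed in liver"]
      y = [1, 1, 0, 0]  # 1 = sentence expresses a relation between its entities
      alpha = 0.6       # mixing weight of the composite kernel

      K = alpha * gram(train, train, word_kernel) + (1 - alpha) * gram(train, train, bigram_kernel)
      clf = SVC(kernel="precomputed").fit(K, y)

      test = ["protein C binds protein D"]
      K_test = alpha * gram(test, train, word_kernel) + (1 - alpha) * gram(test, train, bigram_kernel)
      print(clf.predict(K_test))  # expect [1]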
  6. Fu, T.; Abbasi, A.; Chen, H.: A hybrid approach to Web forum interactional coherence analysis (2008) 0.01
    0.0066833766 = product of:
      0.016708441 = sum of:
        0.0100103095 = weight(_text_:a in 1872) [ClassicSimilarity], result of:
          0.0100103095 = score(doc=1872,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18723148 = fieldWeight in 1872, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1872)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 1872) [ClassicSimilarity], result of:
              0.013396261 = score(doc=1872,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 1872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1872)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Despite the rapid growth of text-based computer-mediated communication (CMC), its limitations have rendered the media highly incoherent. This poses problems for content analysis of online discourse archives. Interactional coherence analysis (ICA) attempts to accurately identify and construct CMC interaction networks. In this study, we propose the Hybrid Interactional Coherence (HIC) algorithm for identification of web forum interaction. HIC utilizes a bevy of system and linguistic features, including message header information, quotations, direct address, and lexical relations. Furthermore, several similarity-based methods including a Lexical Match Algorithm (LMA) and a sliding window method are utilized to account for interactional idiosyncrasies. Experimental results on two web forums revealed that the proposed HIC algorithm significantly outperformed comparison techniques in terms of precision, recall, and F-measure at both the forum and thread levels. Additionally, an example was used to illustrate how the improved ICA results can facilitate enhanced social network and role analysis capabilities.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1195-1209
    Type
    a
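
    The lexical-match and sliding-window ideas above can be shown in a few lines: link each message to its most lexically similar predecessor within a recency window. This toy keeps only that one component of HIC (the full algorithm also exploits headers, quotations, and direct address), and the thread and threshold are invented:

      import re

      def link_replies(messages, window=3, threshold=0.2):
          # Link each message to the most similar earlier message in the window;
          # ties go to the more recent candidate. None = likely a new thread.
          links = {}
          for i, msg in enumerate(messages):
              words = set(re.findall(r"\w+", msg.lower()))
              best, best_sim = None, threshold
              for j in range(max(0, i - window), i):
                  prev = set(re.findall(r"\w+", messages[j].lower()))
                  sim = len(words & prev) / max(1, len(words | prev))
                  if sim >= best_sim:
                      best, best_sim = j, sim
              links[i] = best
          return links

      thread = ["how do I flash the firmware?",
                "which firmware version do you have?",
                "totally unrelated spam post",
                "I have firmware version 2.1 and cannot flash it"]
      print(link_replies(thread))  # {0: None, 1: 0, 2: None, 3: 1}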
  7. Chung, W.; Zhang, Y.; Huang, Z.; Wang, G.; Ong, T.-H.; Chen, H.: Internet searching and browsing in a multilingual world : an experiment on the Chinese Business Intelligence Portal (CBizPort) (2004) 0.01
    0.0065874713 = product of:
      0.016468678 = sum of:
        0.009632425 = weight(_text_:a in 2393) [ClassicSimilarity], result of:
          0.009632425 = score(doc=2393,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.18016359 = fieldWeight in 2393, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2393)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 2393) [ClassicSimilarity], result of:
              0.013672504 = score(doc=2393,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 2393, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2393)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The rapid growth of the non-English-speaking Internet population has created a need for better searching and browsing capabilities in languages other than English. However, existing search engines may not serve the needs of many non-English-speaking Internet users. In this paper, we propose a generic and integrated approach to searching and browsing the Internet in a multilingual world. Based on this approach, we have developed the Chinese Business Intelligence Portal (CBizPort), a meta-search engine that searches for business information of mainland China, Taiwan, and Hong Kong. Additional functions provided by CBizPort include encoding conversion (between Simplified Chinese and Traditional Chinese), summarization, and categorization. Experimental results of our user evaluation study show that the searching and browsing performance of CBizPort was comparable to that of regional Chinese search engines, and CBizPort could significantly augment these search engines. Subjects' verbal comments indicate that CBizPort performed best in terms of analysis functions, cross-regional searching, and user-friendliness, whereas regional search engines were more efficient and more popular. Subjects especially liked CBizPort's summarizer and categorizer, which helped in understanding search results. These encouraging results suggest a promising future of our approach to Internet searching and browsing in a multilingual world.
    Footnote
    Part of a special issue on: Information seeking research
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.9, S.818-831
    Type
    a
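
    Two of the portal functions named above, meta-searching several regional engines and converting between Simplified and Traditional Chinese, compose naturally: query the engines in parallel, normalize every title to one script, then merge. The sketch below is a heavily simplified assumption of that pipeline; the mapping table holds only four sample characters and the "engines" are local stubs:

      from concurrent.futures import ThreadPoolExecutor

      # Tiny sample of a Simplified -> Traditional table; real converters
      # ship complete tables and handle whole result pages.
      S2T = str.maketrans({"简": "簡", "体": "體", "营": "營", "销": "銷"})

      def meta_search(query, engines):
          # Fan the query out in parallel, unify the script, deduplicate.
          with ThreadPoolExecutor() as pool:
              result_lists = list(pool.map(lambda engine: engine(query), engines))
          merged, seen = [], set()
          for results in result_lists:
              for title in results:
                  unified = title.translate(S2T)
                  if unified not in seen:
                      seen.add(unified)
                      merged.append(unified)
          return merged

      # Stand-in "engines": one returns Simplified titles, one Traditional.
      mainland = lambda q: ["营销简体"]
      taiwan = lambda q: ["營銷簡體"]
      print(meta_search("marketing", [mainland, taiwan]))  # one title after merging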
  8. Hu, P.J.-H.; Lin, C.; Chen, H.: User acceptance of intelligence and security informatics technology : a study of COPLINK (2005) 0.01
    0.006550755 = product of:
      0.016376887 = sum of:
        0.008173384 = weight(_text_:a in 3233) [ClassicSimilarity], result of:
          0.008173384 = score(doc=3233,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 3233, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3233)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 3233) [ClassicSimilarity], result of:
              0.016407004 = score(doc=3233,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 3233, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3233)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The importance of Intelligence and Security Informatics (ISI) has significantly increased with the rapid and large-scale migration of local/national security information from physical media to electronic platforms, including the Internet and information systems. Motivated by the significance of ISI in law enforcement (particularly in the digital government context) and the limited investigations of officers' technology-acceptance decision making, we developed and empirically tested a factor model for explaining law-enforcement officers' technology acceptance. Specifically, our empirical examination targeted the COPLINK technology and involved more than 280 police officers. Overall, our model shows a good fit to the data collected and exhibits satisfactory power for explaining law-enforcement officers' technology acceptance decisions. Our findings have several implications for research and technology management practices in law enforcement, which are also discussed.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.235-244
    Type
    a
  9. Chen, H.: Introduction to the JASIST special topic section on Web retrieval and mining : A machine learning perspective (2003) 0.01
    0.006548052 = product of:
      0.01637013 = sum of:
        0.005779455 = weight(_text_:a in 1610) [ClassicSimilarity], result of:
          0.005779455 = score(doc=1610,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 1610, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1610)
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 1610) [ClassicSimilarity], result of:
              0.02118135 = score(doc=1610,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 1610, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1610)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Research in information retrieval (IR) has advanced significantly in the past few decades. Many tasks, such as indexing and text categorization, can be performed automatically with minimal human effort. Machine learning has played an important role in such automation by learning various patterns such as document topics, text structures, and user interests from examples. In recent years, it has become increasingly difficult to search for useful information on the World Wide Web because of its large size and unstructured nature. Useful information and resources are often hidden in the Web. While machine learning has been successfully applied to traditional IR systems, it poses some new challenges to apply these algorithms to the Web due to its large size, link structure, diversity in content and languages, and dynamic nature. On the other hand, such characteristics of the Web also provide interesting patterns and knowledge that are not present in traditional information retrieval systems.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.621-624
    Type
    a
  10. Ku, Y.; Chiu, C.; Zhang, Y.; Chen, H.; Su, H.: Text mining self-disclosing health information for public health service (2014) 0.01
    0.006548052 = product of:
      0.01637013 = sum of:
        0.005779455 = weight(_text_:a in 1262) [ClassicSimilarity], result of:
          0.005779455 = score(doc=1262,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 1262, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1262)
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 1262) [ClassicSimilarity], result of:
              0.02118135 = score(doc=1262,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 1262, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1262)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Understanding specific patterns or knowledge of self-disclosing health information could support public health surveillance and healthcare. This study aimed to develop an analytical framework to identify self-disclosing health information with unusual messages on web forums by leveraging advanced text-mining techniques. To demonstrate the performance of the proposed analytical framework, we conducted an experimental study on 2 major human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) forums in Taiwan. The experimental results show that the classification accuracy increased significantly (up to 83.83%) when using features selected by the information gain technique. The results also show the importance of adopting domain-specific features in analyzing unusual messages on web forums. This study has practical implications for the prevention and support of HIV/AIDS healthcare. For example, public health agencies can re-allocate resources and deliver services to people who need help via social media sites. In addition, individuals can also join a social media site to get better suggestions and support from each other.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.5, S.928-947
    Type
    a
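
    Information gain, the feature-selection criterion credited above with lifting accuracy to 83.83%, measures how much knowing a term's presence reduces class-label entropy. A minimal sketch on invented toy data:

      import math
      from collections import Counter

      def entropy(labels):
          n = len(labels)
          return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

      def information_gain(docs, labels, term):
          # IG(term) = H(C) - H(C | term present/absent).
          with_t = [y for d, y in zip(docs, labels) if term in d.split()]
          without = [y for d, y in zip(docs, labels) if term not in d.split()]
          h_cond = sum(len(part) / len(docs) * entropy(part)
                       for part in (with_t, without) if part)
          return entropy(labels) - h_cond

      docs = ["test result positive", "worried about my test",
              "weather is nice", "test the nice weekend"]
      labels = ["health", "health", "other", "other"]
      for term in ("test", "nice"):
          print(term, round(information_gain(docs, labels, term), 3))  # 0.311 vs 1.0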
  11. Chen, H.; Ng, T.: An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.01
    0.0065180818 = product of:
      0.016295204 = sum of:
        0.01155891 = weight(_text_:a in 2203) [ClassicSimilarity], result of:
          0.01155891 = score(doc=2203,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.2161963 = fieldWeight in 2203, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 2203) [ClassicSimilarity], result of:
              0.009472587 = score(doc=2203,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 2203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2203)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, develops 2 spreading-activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g. multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The 2nd algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Tests these 2 algorithms on a large text-based knowledge network of about 13.000 nodes (terms) and 80.000 directed links in the area of computing technologies.
    Source
    Journal of the American Society for Information Science. 46(1995) no.5, S.348-369
    Type
    a
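
    The second algorithm's parallel relaxation can be miniaturized as spreading activation with a sigmoid transfer function: seed the query terms, propagate activation over weighted links each round, and stop when the state settles. Everything below (network, weights, gain, threshold) is an invented toy, not the paper's tuned Hopfield formulation:

      import math

      def spread_activation(graph, seeds, iters=20, eps=1e-4, threshold=0.1):
          # Parallel update: every node recomputes from its neighbors' previous
          # activations; max() keeps activation monotone so the loop converges.
          act = {node: (1.0 if node in seeds else 0.0) for node in graph}
          for _ in range(iters):
              new = {}
              for node, links in graph.items():
                  s = sum(w * act[nbr] for nbr, w in links)
                  new[node] = max(act[node], 1.0 / (1.0 + math.exp(-5 * (s - 0.5))))
              if max(abs(new[n] - act[n]) for n in act) < eps:
                  break
              act = new
          return {n: a for n, a in sorted(act.items(), key=lambda x: -x[1]) if a > threshold}

      # Toy concept network: term -> [(neighbor, link weight)]
      net = {"neural networks": [("machine learning", 0.8), ("hopfield net", 0.9)],
             "machine learning": [("neural networks", 0.8), ("data mining", 0.6)],
             "hopfield net": [("neural networks", 0.9)],
             "data mining": [("machine learning", 0.6)]}
      print(spread_activation(net, seeds={"neural networks"}))  # ranked "convergent" concepts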
  12. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.01
    0.006474727 = product of:
      0.016186817 = sum of:
        0.010661141 = weight(_text_:a in 6928) [ClassicSimilarity], result of:
          0.010661141 = score(doc=6928,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 6928, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6928)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 6928) [ClassicSimilarity], result of:
              0.011051352 = score(doc=6928,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 6928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6928)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classification of electronic brainstorming output and an evaluation of the results. Describes an electronic meeting system and describes the classification problem that exists in the group problem solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with those of a human expert and a Hopfield neural network. Discusses conclusions and directions for future research
    Source
    Journal of the American Society for Information Science. 48(1997) no.2, S.157-170
    Type
    a
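
    A Kohonen SOM of the kind described above can be written in a handful of numpy lines: repeatedly pick a sample, find its best-matching unit (BMU) on the grid, and pull the BMU and its neighbors toward the sample while the learning rate and neighborhood shrink. The sketch clusters invented 2-D "comment vectors" onto a 4x4 map; real meeting output would first be turned into term vectors:

      import numpy as np

      def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=42):
          rng = np.random.default_rng(seed)
          h, w = grid
          weights = rng.random((h, w, data.shape[1]))
          coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
          for t in range(iters):
              x = data[rng.integers(len(data))]
              bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
              frac = t / iters
              lr, sigma = lr0 * (1 - frac), max(0.5, sigma0 * (1 - frac))
              dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
              nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighborhood kernel
              weights += lr * nbh * (x - weights)
          return weights

      def map_to_cell(weights, x):
          return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2])

      rng = np.random.default_rng(0)
      pts = np.vstack([rng.normal(0.2, 0.05, (20, 2)), rng.normal(0.8, 0.05, (20, 2))])
      som = train_som(pts)
      print(map_to_cell(som, pts[0]), map_to_cell(som, pts[-1]))  # the two clusters land apart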
  13. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.01
    0.0062546856 = product of:
      0.015636714 = sum of:
        0.0068111527 = weight(_text_:a in 4367) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=4367,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 4367, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4367)
        0.008825562 = product of:
          0.017651124 = sum of:
            0.017651124 = weight(_text_:information in 4367) [ClassicSimilarity], result of:
              0.017651124 = score(doc=4367,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21684799 = fieldWeight in 4367, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4367)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Information extraction is an important text-mining task that aims at extracting prespecified types of information from large text collections and making them available in structured representations such as databases. In the biomedical domain, information extraction can be applied to help biologists make the most use of their digital-literature archives. Currently, there are large amounts of biomedical literature that contain rich information about biomedical substances. Extracting such knowledge requires a good named entity recognition technique. In this article, we combine conditional random fields (CRFs), a state-of-the-art sequence-labeling algorithm, with two semisupervised learning techniques, bootstrapping and feature sampling, to recognize disease names from biomedical literature. Two data-processing strategies for each technique also were analyzed: one sequentially processing unlabeled data partitions and another one processing unlabeled data partitions in a round-robin fashion. The experimental results showed the advantage of semisupervised learning techniques given limited labeled training data. Specifically, CRFs with bootstrapping implemented in sequential fashion outperformed strictly supervised CRFs for disease name recognition. The project was supported by NIH/NLM Grant R33 LM07299-01, 2002-2005.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.4, S.727-737
    Type
    a
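
    The bootstrapping idea (train, label an unlabeled partition, keep only confident predictions, retrain) does not depend on the CRF itself. Below is a minimal self-training loop with logistic regression standing in for the CRF and sentence classification standing in for sequence labeling; the data, labels, and confidence cutoff are all invented:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      def bootstrap(texts, labels, unlabeled_parts, confidence=0.6):
          # Process unlabeled partitions sequentially, as in the study's
          # better-performing configuration.
          vec = TfidfVectorizer()
          X_text, y = list(texts), list(labels)
          clf = None
          for part in unlabeled_parts:
              clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_text), y)
              probs = clf.predict_proba(vec.transform(part))
              for text, p in zip(part, probs):
                  if p.max() >= confidence:  # keep only confident pseudo-labels
                      X_text.append(text)
                      y.append(clf.classes_[p.argmax()])
          return clf, vec

      texts = ["diabetes mellitus diagnosed", "patient walked home"]
      labels = ["disease", "other"]
      parts = [["severe diabetes noted", "walked to the store"],
               ["mellitus symptoms present"]]
      clf, vec = bootstrap(texts, labels, parts)
      print(clf.predict(vec.transform(["diabetes mellitus case"])))  # expect ['disease']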
  14. Chen, H.: Intelligence and security informatics : Introduction to the special topic issue (2005) 0.01
    0.0061918236 = product of:
      0.015479559 = sum of:
        0.0067426977 = weight(_text_:a in 3232) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=3232,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 3232, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3232)
        0.008736861 = product of:
          0.017473722 = sum of:
            0.017473722 = weight(_text_:information in 3232) [ClassicSimilarity], result of:
              0.017473722 = score(doc=3232,freq=20.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21466857 = fieldWeight in 3232, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3232)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Making the Nation Safer: The Role of Science and Technology in Countering Terrorism. The commitment of the scientific, engineering, and health communities to helping the United States and the world respond to security challenges became evident after September 11, 2001. The U.S. National Research Council's report on "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism" (National Research Council, 2002, p. 1) explains the context of such a new commitment: Terrorism is a serious threat to the security of the United States and indeed the world. The vulnerability of societies to terrorist attacks results in part from the proliferation of chemical, biological, and nuclear weapons of mass destruction, but it also is a consequence of the highly efficient and interconnected systems that we rely on for key services such as transportation, information, energy, and health care. The efficient functioning of these systems reflects great technological achievements of the past century, but interconnectedness within and across systems also means that infrastructures are vulnerable to local disruptions, which could lead to widespread or catastrophic failures. As terrorists seek to exploit these vulnerabilities, it is fitting that we harness the nation's exceptional scientific and technological capabilities to counter terrorist threats. A committee of 24 of the leading scientific, engineering, medical, and policy experts in the United States conducted the study described in the report. Eight panels were separately appointed and asked to provide input to the committee. The panels included: (a) biological sciences, (b) chemical issues, (c) nuclear and radiological issues, (d) information technology, (e) transportation, (f) energy facilities, cities, and fixed infrastructure, (g) behavioral, social, and institutional issues, and (h) systems analysis and systems engineering. The focus of the committee's work was to make the nation safer from emerging terrorist threats that sought to inflict catastrophic damage on the nation's people, its infrastructure, or its economy. The committee considered nine areas, each of which is discussed in a separate chapter in the report: nuclear and radiological materials, human and agricultural health systems, toxic chemicals and explosive materials, information technology, energy systems, transportation systems, cities and fixed infrastructure, the response of people to terrorism, and complex and interdependent systems. The chapter on information technology (IT) is particularly relevant to this special issue. The report recommends that "a strategic long-term research and development agenda should be established to address three primary counterterrorism-related areas in IT: information and network security, the IT needs of emergency responders, and information fusion and management" (National Research Council, 2002, pp. 11-12). The R&D in information and network security should include approaches and architectures for prevention, identification, and containment of cyber-intrusions and recovery from them. The R&D to address IT needs of emergency responders should include ensuring interoperability, maintaining and expanding communications capability during an emergency, communicating with the public during an emergency, and providing support for decision makers.
    The R&D in information fusion and management for the intelligence, law enforcement, and emergency response communities should include data mining, data integration, language technologies, and processing of image and audio data. Much of the research reported in this special issue is related to information fusion and management for homeland security.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.3, S.217-220
    Type
    a
  15. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.01
    0.0060712704 = product of:
      0.015178176 = sum of:
        0.008341924 = weight(_text_:a in 1615) [ClassicSimilarity], result of:
          0.008341924 = score(doc=1615,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 1615, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1615)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 1615) [ClassicSimilarity], result of:
              0.013672504 = score(doc=1615,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 1615, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1615)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Medical professionals and researchers need information from reputable sources to accomplish their work. Unfortunately, the Web has a large number of documents that are irrelevant to their work, even those documents that purport to be "medically-related." This paper describes an architecture designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, or "concept space," and Kohonen-based Self-Organizing Map (SOM) technologies to provide searchers with fine-grained results. Initial results indicate that these systems provide complementary retrieval functionalities. HelpfulMed not only allows users to search Web pages and other online databases, but also allows them to build searches through the use of an automatic thesaurus and browse a graphical display of medical-related topics. Evaluation results for each of the different components are included. Our spidering algorithm outperformed both breadth-first search and PageRank spiders on a test collection of 100,000 Web pages. The automatically generated thesaurus performed as well as both MeSH and UMLS, systems which require human mediation for currency. Lastly, a variant of the Kohonen SOM was comparable to MeSH terms in perceived cluster precision and significantly better at perceived cluster recall.
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.683-694
    Type
    a
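
    Of the two baseline spiders mentioned above, the PageRank-guided one prioritizes URLs by their link-graph importance. A standard power-iteration PageRank, enough to reproduce that baseline's scoring on a toy graph, is sketched below:

      def pagerank(links, damping=0.85, iters=50, tol=1e-8):
          nodes = list(links)
          rank = {n: 1.0 / len(nodes) for n in nodes}
          for _ in range(iters):
              new = {n: (1 - damping) / len(nodes) for n in nodes}
              for n, outs in links.items():
                  if outs:
                      share = damping * rank[n] / len(outs)
                      for m in outs:
                          new[m] += share
                  else:  # dangling page: spread its rank evenly
                      for m in nodes:
                          new[m] += damping * rank[n] / len(nodes)
              if sum(abs(new[n] - rank[n]) for n in nodes) < tol:
                  break
              rank = new
          return rank

      web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
      print({p: round(r, 3) for p, r in pagerank(web).items()})  # "c" ranks highest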
  16. Chau, M.; Shiu, B.; Chan, M.; Chen, H.: Redips: backlink search and analysis on the Web for business intelligence analysis (2007) 0.01
    0.0060712704 = product of:
      0.015178176 = sum of:
        0.008341924 = weight(_text_:a in 142) [ClassicSimilarity], result of:
          0.008341924 = score(doc=142,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 142, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=142)
        0.006836252 = product of:
          0.013672504 = sum of:
            0.013672504 = weight(_text_:information in 142) [ClassicSimilarity], result of:
              0.013672504 = score(doc=142,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16796975 = fieldWeight in 142, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=142)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web presents significant opportunities for business intelligence analysis as it can provide information about a company's external environment and its stakeholders. Traditional business intelligence analysis on the Web has focused on simple keyword searching. Recently, it has been suggested that the incoming links, or backlinks, of a company's Web site (i.e., other Web pages that have a hyperlink pointing to the company of interest) can provide important insights about the company's "online communities." Although analysis of these communities can provide useful signals for a company and information about its stakeholder groups, the manual analysis process can be very time-consuming for business analysts and consultants. In this article, we present a tool called Redips that automatically integrates backlink meta-searching and text-mining techniques to facilitate users in performing such business intelligence analysis on the Web. The architectural design and implementation of the tool are presented in the article. To evaluate the effectiveness, efficiency, and user satisfaction of Redips, an experiment was conducted to compare the tool with two popular business intelligence analysis methods: using backlink search engines and manual browsing. The experiment results showed that Redips was statistically more effective than both benchmark methods (in terms of recall and F-measure) but required more time in search tasks. In terms of user satisfaction, Redips scored statistically higher than backlink search engines in all five measures used, and also statistically higher than manual browsing in three measures.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.3, S.351-365
    Type
    a
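
    Recall and F-measure, the effectiveness metrics used in the comparison above, reduce to a few lines once the retrieved and relevant sets are known; the example values are invented:

      def precision_recall_f1(retrieved, relevant):
          retrieved, relevant = set(retrieved), set(relevant)
          tp = len(retrieved & relevant)                        # true positives
          precision = tp / len(retrieved) if retrieved else 0.0
          recall = tp / len(relevant) if relevant else 0.0
          f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
          return precision, recall, f1

      found = ["blog.example", "news.example", "spam.example"]
      truth = ["blog.example", "news.example", "forum.example", "wiki.example"]
      print(precision_recall_f1(found, truth))  # approximately (0.667, 0.5, 0.571)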
  17. Chen, H.: An analysis of image queries in the field of art history (2001) 0.01
    0.0060245167 = product of:
      0.015061291 = sum of:
        0.009535614 = weight(_text_:a in 5187) [ClassicSimilarity], result of:
          0.009535614 = score(doc=5187,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 5187, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5187)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 5187) [ClassicSimilarity], result of:
              0.011051352 = score(doc=5187,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Chen arranged with an Art History instructor to require the use of 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and Jorgensen schemes are suggested.
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.3, S.260-273
    Type
    a
  18. Schroeder, J.; Xu, J.; Chen, H.; Chau, M.: Automated criminal link analysis based on domain knowledge (2007) 0.01
    0.005948606 = product of:
      0.014871514 = sum of:
        0.008173384 = weight(_text_:a in 275) [ClassicSimilarity], result of:
          0.008173384 = score(doc=275,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 275, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=275)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 275) [ClassicSimilarity], result of:
              0.013396261 = score(doc=275,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 275, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=275)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Link (association) analysis has been used in the criminal justice domain to search large datasets for associations between crime entities in order to facilitate crime investigations. However, link analysis still faces many challenging problems, such as information overload, high search complexity, and heavy reliance on domain knowledge. To address these challenges, this article proposes several techniques for automated, effective, and efficient link analysis. These techniques include the co-occurrence analysis, the shortest path algorithm, and a heuristic approach to identifying associations and determining their importance. We developed a prototype system called CrimeLink Explorer based on the proposed techniques. Results of a user study with 10 crime investigators from the Tucson Police Department showed that our system could help subjects conduct link analysis more efficiently than traditional single-level link analysis tools. Moreover, subjects believed that association paths found based on the heuristic approach were more accurate than those found based solely on the co-occurrence analysis and that the automated link analysis system would be of great help in crime investigations.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.842-855
    Type
    a
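
    The shortest-path component named in the abstract is classically Dijkstra's algorithm over a weighted association network, where a lower edge weight means a stronger (e.g., more frequent) association. A minimal sketch with an invented toy network:

      import heapq

      def strongest_path(graph, src, dst):
          # Dijkstra: the lowest-total-weight path = strongest association chain.
          dist, prev = {src: 0.0}, {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, float("inf")):
                  continue  # stale heap entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          if dst not in dist:
              return None
          path, node = [dst], dst
          while node != src:
              node = prev[node]
              path.append(node)
          return list(reversed(path)), dist[dst]

      # Toy network: weight = 1 / co-occurrence count, so frequent pairs are "closer".
      net = {"suspect A": [("phone 555", 0.2), ("suspect B", 1.0)],
             "phone 555": [("suspect C", 0.25)],
             "suspect B": [("suspect C", 0.5)]}
      print(strongest_path(net, "suspect A", "suspect C"))  # via "phone 555", cost 0.45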
  19. Schatz, B.R.; Johnson, E.H.; Cochrane, P.A.; Chen, H.: Interactive term suggestion for users of digital libraries : using thesauri and co-occurrence lists for information retrieval (1996) 0.01
    0.00588199 = product of:
      0.014704974 = sum of:
        0.0068111527 = weight(_text_:a in 6417) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=6417,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 6417, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=6417)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 6417) [ClassicSimilarity], result of:
              0.015787644 = score(doc=6417,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 6417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6417)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Type
    a
  20. Chen, H.; Houston, A.L.; Sewell, R.R.; Schatz, B.R.: Internet browsing and searching : user evaluations of category map and concept space techniques (1998) 0.01
    0.0056083994 = product of:
      0.014020998 = sum of:
        0.00770594 = weight(_text_:a in 869) [ClassicSimilarity], result of:
          0.00770594 = score(doc=869,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 869, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=869)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 869) [ClassicSimilarity], result of:
              0.012630116 = score(doc=869,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 869, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=869)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Internet provides an exceptional testbed for developing algorithms that can improve browsing and searching large information spaces. Browsing and searching tasks are susceptible to problems of information overload and vocabulary differences. Much of the current research is aimed at the development and refinement of algorithms to improve browsing and searching by addressing these problems. Our research was focused on discovering whether two of the algorithms our research group has developed, a Kohonen algorithm category map for browsing and an automatically generated concept space algorithm for searching, can help improve browsing and/or searching the Internet. Our results indicate that a Kohonen self-organizing map (SOM)-based algorithm can successfully categorize a large and eclectic Internet information space (the Entertainment subcategory of Yahoo!) into manageable sub-spaces that users can successfully navigate to locate a homepage of interest to them. The SOM algorithm worked best with browsing tasks that were very broad, and in which subjects skipped around between categories. Subjects especially liked the visual and graphical aspects of the map. Subjects who tried to do a directed search, and those who wanted to use the more familiar mental models (alphabetic or hierarchical organization) for browsing, found that the map did not work well. The results from the concept space experiment were especially encouraging. There were no significant differences among the precision measures for the set of documents identified by subject-suggested terms, thesaurus-suggested terms, and the combination of subject- and thesaurus-suggested terms. The recall measures indicated that the combination of subject- and thesaurus-suggested terms exhibited significantly better recall than subject-suggested terms alone. Furthermore, analysis of the homepages indicated that there was limited overlap between the homepages retrieved by the subject-suggested and thesaurus-suggested terms. Since the retrieved homepages for the most part were different, this suggests that a user can enhance a keyword-based search by using an automatically generated concept space. Subjects especially liked the level of control that they could exert over the search, and the fact that the terms suggested by the thesaurus were 'real' (i.e., originating in the homepages) and therefore guaranteed to have retrieval success.
    Source
    Journal of the American Society for Information Science. 49(1998) no.7, S.582-603
    Type
    a
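
    The recall result above, thesaurus terms plus subject terms beating subject terms alone, is easy to demonstrate mechanically: expand the query with the concept space's top suggestions and re-measure recall. The concept space, documents, and relevance judgments below are all invented:

      def expand_query(terms, concept_space, per_term=2):
          # Combine user terms with the top thesaurus-suggested terms.
          expanded = set(terms)
          for t in terms:
              suggestions = sorted(concept_space.get(t, []), key=lambda s: -s[1])
              expanded.update(s for s, _ in suggestions[:per_term])
          return expanded

      def recall_of(query_terms, docs, relevant_ids):
          hits = {i for i, d in enumerate(docs) if set(d.split()) & query_terms}
          return len(hits & relevant_ids) / len(relevant_ids)

      concept_space = {"movies": [("cinema", 0.9), ("film", 0.8)]}
      docs = ["classic cinema reviews", "film festival schedule", "stock market news"]
      relevant = {0, 1}
      print(recall_of({"movies"}, docs, relevant))                                # 0.0
      print(recall_of(expand_query({"movies"}, concept_space), docs, relevant))   # 1.0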

Types

  • a 61
  • el 1