Search (21 results, page 1 of 2)

  • Filter: author_ss:"Chen, H."
  1. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    0.0065291426 = product of:
      0.058762282 = sum of:
        0.008435963 = weight(_text_:und in 5202) [ClassicSimilarity], result of:
          0.008435963 = score(doc=5202,freq=2.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.14692576 = fieldWeight in 5202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
        0.05032632 = weight(_text_:indexing in 5202) [ClassicSimilarity], result of:
          0.05032632 = score(doc=5202,freq=8.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.5075084 = fieldWeight in 5202, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
      0.11111111 = coord(2/18)
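     The indented tree above is Lucene's ClassicSimilarity "explain" output, and the factors it lists reproduce the displayed score directly. A minimal Python sketch of that arithmetic (the tf = sqrt(freq) and queryWeight x fieldWeight x coord combination is the standard ClassicSimilarity formula, assumed here rather than read from this page):

       from math import sqrt

       # Factors copied from the explain tree for doc 5202, term "indexing".
       freq = 8.0
       idf = 3.8278677            # idf(docFreq=2614, maxDocs=44218)
       query_norm = 0.025905682
       field_norm = 0.046875

       tf = sqrt(freq)                               # 2.828427
       query_weight = idf * query_norm               # 0.099163525 = queryWeight
       field_weight = tf * idf * field_norm          # 0.5075084   = fieldWeight
       indexing_score = query_weight * field_weight  # 0.05032632

       # The "und" clause contributes 0.008435963 the same way; only 2 of the
       # 18 query clauses match this document, hence coord(2/18) = 0.11111111.
       total = (indexing_score + 0.008435963) * (2 / 18)
       print(total)   # ~0.0065291426, the score shown for result 1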
    
    Abstract
     In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
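     The co-occurrence analysis named in the abstract of result 1 is not spelled out on this page; a hedged sketch of one common formulation (asymmetric association weights between indexed terms, with hypothetical toy data) looks like this:

       from collections import Counter, defaultdict
       from itertools import combinations

       def cooccurrence_weights(docs):
           """docs: one list of indexed terms per abstract.
           Returns w[a][b]: how strongly term a suggests term b (0..1)."""
           term_freq, pair_freq = Counter(), Counter()
           for terms in docs:
               uniq = set(terms)
               term_freq.update(uniq)
               for a, b in combinations(uniq, 2):
                   pair_freq[(a, b)] += 1
                   pair_freq[(b, a)] += 1
           w = defaultdict(dict)
           for (a, b), f_ab in pair_freq.items():
               w[a][b] = f_ab / term_freq[a]   # docs with both / docs with a
           return w

       docs = [["automatic indexing", "thesaurus", "co-occurrence analysis"],
               ["thesaurus", "concept space"],
               ["automatic indexing", "concept space", "thesaurus"]]
       w = cooccurrence_weights(docs)
       print(sorted(w["thesaurus"].items(), key=lambda kv: -kv[1])[:3])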
  2. Ramsey, M.C.; Chen, H.; Zhu, B.; Schatz, B.R.: ¬A collection of visual thesauri for browsing large collections of geographic images (1999) 0.01
    0.005531235 = product of:
      0.049781114 = sum of:
        0.041517094 = weight(_text_:indexing in 3922) [ClassicSimilarity], result of:
          0.041517094 = score(doc=3922,freq=4.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.41867304 = fieldWeight in 3922, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3922)
        0.008264019 = product of:
          0.024792057 = sum of:
            0.024792057 = weight(_text_:29 in 3922) [ClassicSimilarity], result of:
              0.024792057 = score(doc=3922,freq=2.0), product of:
                0.09112809 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025905682 = queryNorm
                0.27205724 = fieldWeight in 3922, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3922)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
     Digital libraries of geo-spatial multimedia content are currently deficient in providing fuzzy, concept-based retrieval mechanisms to users. The main challenge is that indexing and thesaurus creation are extremely labor-intensive processes for text documents and especially for images. Recently, 800,000 declassified satellite photographs were made available by the US Geological Survey. Additionally, millions of satellite and aerial photographs are archived in national and local map libraries. Such enormous collections make human indexing and thesaurus generation methods impossible to utilize. In this article we propose a scalable method to automatically generate visual thesauri of large collections of geo-spatial media using fuzzy, unsupervised machine-learning techniques.
    Date
    21. 7.1999 13:48:29
  3. Chen, H.; Yim, T.; Fye, D.: Automatic thesaurus generation for an electronic community system (1995) 0.00
    0.0031110297 = product of:
      0.027999267 = sum of:
        0.0070299692 = weight(_text_:und in 2918) [ClassicSimilarity], result of:
          0.0070299692 = score(doc=2918,freq=2.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.12243814 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
        0.020969298 = weight(_text_:indexing in 2918) [ClassicSimilarity], result of:
          0.020969298 = score(doc=2918,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.21146181 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
      0.11111111 = coord(2/18)
    
    Abstract
     Reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used included term filtering, automatic indexing, and cluster analysis. The testbed for the research was the Worm Community System, which contains a comprehensive library of specialized community data and literature, currently in use by molecular biologists who study the nematode worm. The resulting worm thesaurus included 2709 researchers' names, 798 gene names, 20 experimental methods, and 4302 subject descriptors. On average, each term had about 90 weighted neighbouring terms indicating relevant concepts. The thesaurus was developed as an online search aid. Tests the worm thesaurus in an experiment with 6 worm researchers of varying degrees of expertise and background. The experiment showed that the thesaurus was an excellent 'memory jogging' device and that it supported learning and serendipitous browsing. Despite some occurrences of obvious noise, the system was useful in suggesting relevant concepts for the researchers' queries and it helped improve concept recall. With a simple browsing interface, an automatic thesaurus can become a useful tool for online search and can assist researchers in exploring and traversing a dynamic and complex electronic community system.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
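     The 'memory jogging' use described in result 3 (each term carrying roughly 90 weighted neighbouring terms) amounts to a top-k neighbour lookup; a minimal sketch with an assumed thesaurus structure and made-up weights:

       # Hypothetical slice of a worm thesaurus: term -> {neighbour: weight}.
       # Structure and numbers are illustrative, not taken from the paper.
       worm_thesaurus = {
           "unc-22": {"twitching": 0.61, "muscle": 0.44, "myosin": 0.29},
           "lin-12": {"cell fate": 0.58, "notch": 0.52, "vulva": 0.31},
       }

       def suggest(term, k=3):
           """Return up to k related concepts for a researcher's query term."""
           neighbours = worm_thesaurus.get(term, {})
           return sorted(neighbours, key=neighbours.get, reverse=True)[:k]

       print(suggest("unc-22"))   # ['twitching', 'muscle', 'myosin']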
  4. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: ¬A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.00
    0.0028319128 = product of:
      0.05097443 = sum of:
        0.05097443 = weight(_text_:automatisches in 6928) [ClassicSimilarity], result of:
          0.05097443 = score(doc=6928,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.39010382 = fieldWeight in 6928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6928)
      0.055555556 = coord(1/18)
    
    Theme
    Automatisches Klassifizieren
  5. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: ¬An evaluation of classification models for question topic categorization (2012) 0.00
    0.0020227947 = product of:
      0.036410306 = sum of:
        0.036410306 = weight(_text_:automatisches in 237) [ClassicSimilarity], result of:
          0.036410306 = score(doc=237,freq=2.0), product of:
            0.13066888 = queryWeight, product of:
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.025905682 = queryNorm
            0.27864558 = fieldWeight in 237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.044024 = idf(docFreq=774, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.055555556 = coord(1/18)
    
    Theme
    Automatisches Klassifizieren
  6. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.00
    0.0019770046 = product of:
      0.03558608 = sum of:
        0.03558608 = weight(_text_:indexing in 5704) [ClassicSimilarity], result of:
          0.03558608 = score(doc=5704,freq=4.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.3588626 = fieldWeight in 5704, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.046875 = fieldNorm(doc=5704)
      0.055555556 = coord(1/18)
    
    Abstract
     Presents a neural network approach to document semantic indexing. Reports results of a study to apply a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that three parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by solving discrepancies between indexer vocabularies and end user vocabularies.
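     The three convergence parameters named in this abstract map directly onto a spreading-activation loop; a minimal sketch (the association matrix, transfer function, and thresholds are assumptions, not the paper's settings):

       import numpy as np

       def hopfield_activate(weights, seed_ids, max_nodes=20,
                             max_error=1e-3, max_iters=50):
           """Spread activation from seed terms over a term-association matrix."""
           act = np.zeros(weights.shape[0])
           act[seed_ids] = 1.0
           for _ in range(max_iters):                       # maximum number of iterations
               new_act = np.tanh(weights @ act)             # sigmoid-style transfer
               new_act[seed_ids] = 1.0                      # keep the query terms clamped
               if np.abs(new_act - act).sum() < max_error:  # maximum allowable error
                   act = new_act
                   break
               act = new_act
           top = np.argsort(-act)[:max_nodes]               # maximum number of activated nodes
           return top, act[top]

       w = np.array([[0.0, 0.8, 0.1], [0.8, 0.0, 0.5], [0.1, 0.5, 0.0]])
       print(hopfield_activate(w, seed_ids=[0], max_nodes=2))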
  7. Chen, H.: Machine learning for information retrieval : neural networks, symbolic learning, and genetic algorithms (1994) 0.00
    0.0016309456 = product of:
      0.02935702 = sum of:
        0.02935702 = weight(_text_:indexing in 2657) [ClassicSimilarity], result of:
          0.02935702 = score(doc=2657,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.29604656 = fieldWeight in 2657, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2657)
      0.055555556 = coord(1/18)
    
    Abstract
     In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, researchers have turned to newer artificial intelligence based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms grounded on diverse paradigms. These have provided great opportunities to enhance the capabilities of current information storage and retrieval systems. Provides an overview of these techniques and presents 3 popular methods in the context of information retrieval: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms. The techniques are promising in their ability to analyze user queries, identify users' information needs, and suggest alternatives for search, and can greatly complement the prevailing full text, keyword based, probabilistic, and knowledge based techniques.
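     As a concrete illustration of the genetic-algorithm variant in an IR setting, a toy sketch that evolves a binary keyword mask against a placeholder relevance-feedback fitness (keywords and fitness are invented for illustration, not the article's):

       import random

       KEYWORDS = ["neural", "network", "symbolic", "learning", "genetic", "retrieval"]
       RELEVANT = {"neural", "network", "retrieval"}        # stand-in for user feedback

       def fitness(mask):
           chosen = {k for k, bit in zip(KEYWORDS, mask) if bit}
           return len(chosen & RELEVANT) / len(chosen | RELEVANT) if chosen else 0.0

       def evolve(pop_size=20, generations=30, p_mut=0.1):
           pop = [[random.randint(0, 1) for _ in KEYWORDS] for _ in range(pop_size)]
           for _ in range(generations):
               pop.sort(key=fitness, reverse=True)
               parents = pop[: pop_size // 2]               # selection
               children = []
               while len(parents) + len(children) < pop_size:
                   a, b = random.sample(parents, 2)
                   cut = random.randrange(1, len(KEYWORDS))             # crossover
                   child = [int(bit) ^ (random.random() < p_mut)        # mutation
                            for bit in a[:cut] + b[cut:]]
                   children.append(child)
               pop = parents + children
           return max(pop, key=fitness)

       print([k for k, bit in zip(KEYWORDS, evolve()) if bit])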
  8. Chen, H.; Shankaranarayanan, G.; She, L.: ¬A machine learning approach to inductive query by examples : an experiment using relevance feedback, ID3, genetic algorithms, and simulated annealing (1998) 0.00
    0.0013979534 = product of:
      0.02516316 = sum of:
        0.02516316 = weight(_text_:indexing in 1148) [ClassicSimilarity], result of:
          0.02516316 = score(doc=1148,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.2537542 = fieldWeight in 1148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.046875 = fieldNorm(doc=1148)
      0.055555556 = coord(1/18)
    
    Abstract
     Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to 'intelligent' information retrieval and indexing. More recently, information science researchers have turned to other newer inductive learning techniques including symbolic learning, genetic algorithms, and simulated annealing. These newer techniques, which are grounded in diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information systems. In this article, we first provide an overview of these newer techniques and their use in information retrieval research. In order to familiarize readers with the techniques, we present 3 promising methods: the symbolic ID3 algorithm, evolution-based genetic algorithms, and simulated annealing. We discuss their knowledge representations and algorithms in the unique context of information retrieval.
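     For the simulated-annealing method, a sketch in the same toy setting (the move is flipping one keyword in or out of the query; the scoring function is a placeholder, not the article's):

       import math
       import random

       def anneal(score, state, t0=1.0, cooling=0.95, steps=200):
           """Generic simulated annealing over a binary keyword mask."""
           cur, best = state[:], state[:]
           t = t0
           for _ in range(steps):
               cand = cur[:]
               cand[random.randrange(len(cand))] ^= 1       # flip one keyword
               delta = score(cand) - score(cur)
               if delta >= 0 or random.random() < math.exp(delta / t):
                   cur = cand                               # sometimes accept worse states
               if score(cur) > score(best):
                   best = cur[:]
               t *= cooling                                 # cool the temperature
           return best

       target = [1, 0, 1, 1, 0, 1]                          # stand-in relevance profile

       def score(mask):                                     # placeholder relevance score
           return -sum(a != b for a, b in zip(mask, target))

       print(anneal(score, [0] * 6))                        # drifts toward the target mask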
  9. Chen, H.: Introduction to the JASIST special topic section on Web retrieval and mining : A machine learning perspective (2003) 0.00
    0.0013979534 = product of:
      0.02516316 = sum of:
        0.02516316 = weight(_text_:indexing in 1610) [ClassicSimilarity], result of:
          0.02516316 = score(doc=1610,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.2537542 = fieldWeight in 1610, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.046875 = fieldNorm(doc=1610)
      0.055555556 = coord(1/18)
    
    Abstract
     Research in information retrieval (IR) has advanced significantly in the past few decades. Many tasks, such as indexing and text categorization, can be performed automatically with minimal human effort. Machine learning has played an important role in such automation by learning various patterns such as document topics, text structures, and user interests from examples. In recent years, it has become increasingly difficult to search for useful information on the World Wide Web because of its large size and unstructured nature. Useful information and resources are often hidden in the Web. While machine learning has been successfully applied to traditional IR systems, applying these algorithms to the Web poses new challenges due to its large size, link structure, diversity in content and languages, and dynamic nature. On the other hand, such characteristics of the Web also provide interesting patterns and knowledge that are not present in traditional information retrieval systems.
  10. Chen, H.: Generating, integrating and activating thesauri for concept-based document retrieval (1993) 0.00
    0.0012497723 = product of:
      0.022495901 = sum of:
        0.022495901 = weight(_text_:und in 7623) [ClassicSimilarity], result of:
          0.022495901 = score(doc=7623,freq=2.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.39180204 = fieldWeight in 7623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=7623)
      0.055555556 = coord(1/18)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  11. Chen, H.; Chung, Y.-M.; Ramsey, M.; Yang, C.C.: ¬A smart itsy bitsy spider for the Web (1998) 0.00
    0.001164961 = product of:
      0.020969298 = sum of:
        0.020969298 = weight(_text_:indexing in 871) [ClassicSimilarity], result of:
          0.020969298 = score(doc=871,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.21146181 = fieldWeight in 871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.0390625 = fieldNorm(doc=871)
      0.055555556 = coord(1/18)
    
    Abstract
     As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed 2 Web personal spiders based on best first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages in the Web, based on the links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best first search spider, we found both results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained significantly higher recall value than that of the best first search spider. However, their precision values were not statistically different. The mutation process introduced in genetic algorithms allows users to find other potential relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for design of a truly interactive and dynamic Web agent.
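     A hedged sketch of the best-first-search spider idea described above (frontier ordered by keyword similarity to the starting pages; fetch is a stub the caller must supply, and all names are illustrative):

       import heapq

       def jaccard(a, b):
           union = a | b
           return len(a & b) / len(union) if union else 0.0

       def best_first_spider(seed_urls, fetch, query_terms, max_pages=50):
           """Best-first crawl: always expand the most promising page next.
           fetch(url) must return (set_of_keywords, list_of_outlinks)."""
           frontier = [(-1.0, url) for url in seed_urls]    # max-heap via negated scores
           heapq.heapify(frontier)
           visited, ranked = set(), []
           while frontier and len(ranked) < max_pages:
               _, url = heapq.heappop(frontier)
               if url in visited:
                   continue
               visited.add(url)
               terms, links = fetch(url)
               score = jaccard(terms, query_terms)
               ranked.append((url, score))
               for link in links:                           # children inherit the parent's promise
                   if link not in visited:
                       heapq.heappush(frontier, (-score, link))
           return sorted(ranked, key=lambda r: -r[1])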
  12. Chen, H.; Fan, H.; Chau, M.; Zeng, D.: MetaSpider : meta-searching and categorization on the Web (2001) 0.00
    0.001164961 = product of:
      0.020969298 = sum of:
        0.020969298 = weight(_text_:indexing in 6849) [ClassicSimilarity], result of:
          0.020969298 = score(doc=6849,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.21146181 = fieldWeight in 6849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6849)
      0.055555556 = coord(1/18)
    
    Abstract
     It has become increasingly difficult to locate relevant information on the Web, even with the help of Web search engines. Two approaches to addressing the low precision and poor presentation of search results of current search tools are studied: meta-search and document categorization. Meta-search engines improve precision by selecting and integrating search results from generic or domain-specific Web search engines or other resources. Document categorization promises better organization and presentation of retrieved results. This article introduces MetaSpider, a meta-search engine that has real-time indexing and categorizing functions. We report in this paper the major components of MetaSpider and discuss related technical approaches. Initial results of a user evaluation study comparing MetaSpider, NorthernLight, and MetaCrawler in terms of clustering performance and of time and effort expended show that MetaSpider performed best in precision rate, but disclose no statistically significant differences in recall rate and time requirements. Our experimental study also reveals that MetaSpider exhibited a higher level of automation than the other two systems and facilitated efficient searching by providing the user with an organized, comprehensive view of the retrieved documents.
  13. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.00
    0.001164961 = product of:
      0.020969298 = sum of:
        0.020969298 = weight(_text_:indexing in 1615) [ClassicSimilarity], result of:
          0.020969298 = score(doc=1615,freq=2.0), product of:
            0.099163525 = queryWeight, product of:
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.025905682 = queryNorm
            0.21146181 = fieldWeight in 1615, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8278677 = idf(docFreq=2614, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1615)
      0.055555556 = coord(1/18)
    
    Abstract
     Medical professionals and researchers need information from reputable sources to accomplish their work. Unfortunately, the Web has a large number of documents that are irrelevant to their work, even those documents that purport to be "medically-related." This paper describes an architecture designed to integrate advanced searching and indexing algorithms, an automatic thesaurus, or "concept space," and Kohonen-based Self-Organizing Map (SOM) technologies to provide searchers with fine-grained results. Initial results indicate that these systems provide complementary retrieval functionalities. HelpfulMed not only allows users to search Web pages and other online databases, but also allows them to build searches through the use of an automatic thesaurus and browse a graphical display of medical-related topics. Evaluation results for each of the different components are included. Our spidering algorithm outperformed both breadth-first search and PageRank spiders on a test collection of 100,000 Web pages. The automatically generated thesaurus performed as well as both MeSH and UMLS, systems which require human mediation for currency. Lastly, a variant of the Kohonen SOM was comparable to MeSH terms in perceived cluster precision and significantly better at perceived cluster recall.
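     A minimal sketch of the Kohonen self-organizing-map step behind the graphical topic display (a 1-D map over toy document vectors; all sizes and rates are assumptions, not the system's configuration):

       import numpy as np

       def train_som(docs, map_size=5, iters=200, lr0=0.5, radius0=2.0):
           """docs: (n_docs, n_terms) tf-idf-style vectors; returns map node weights."""
           rng = np.random.default_rng(0)
           w = rng.random((map_size, docs.shape[1]))
           for t in range(iters):
               lr = lr0 * (1 - t / iters)
               radius = max(radius0 * (1 - t / iters), 0.5)
               x = docs[rng.integers(len(docs))]            # pick a random document
               bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
               dist = np.abs(np.arange(map_size) - bmu)
               h = np.exp(-(dist ** 2) / (2 * radius ** 2)) # neighbourhood kernel
               w += lr * h[:, None] * (x - w)
           return w

       docs = np.eye(4)                                     # four toy one-hot "documents"
       som = train_som(docs)
       print(np.argmin(((som[:, None] - docs) ** 2).sum(-1), axis=0))  # map node per document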
  14. Chen, H.; Ng, T.: ¬An algorithmic approach to concept exploration in a large knowledge network (automatic thesaurus consultation) : symbolic branch-and-bound search versus connectionist Hopfield Net Activation (1995) 0.00
    4.686646E-4 = product of:
      0.008435963 = sum of:
        0.008435963 = weight(_text_:und in 2203) [ClassicSimilarity], result of:
          0.008435963 = score(doc=2203,freq=2.0), product of:
            0.0574165 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.025905682 = queryNorm
            0.14692576 = fieldWeight in 2203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=2203)
      0.055555556 = coord(1/18)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Chen, H.: ¬An analysis of image queries in the field of art history (2001) 0.00
    4.5911217E-4 = product of:
      0.008264019 = sum of:
        0.008264019 = product of:
          0.024792057 = sum of:
            0.024792057 = weight(_text_:29 in 5187) [ClassicSimilarity], result of:
              0.024792057 = score(doc=5187,freq=2.0), product of:
                0.09112809 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025905682 = queryNorm
                0.27205724 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5187)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Abstract
     Chen arranged with an Art History instructor to require 20 medieval art images in papers received from 29 students. Participants completed a self-administered presearch and postsearch questionnaire, and were interviewed after questionnaire analysis, in order to collect both the keywords and phrases they planned to use and those actually used. Three MLIS student reviewers then mapped the queries to Enser and McGregor's four categories, Jorgensen's 12 classes, and Fidel's 12 feature data and object poles, providing a degree of match on a seven-point scale (1 = not at all, 7 = exact). The reviewers gave the highest scores to Enser and McGregor's categories. Modifications to both the Enser and McGregor and Jorgensen schemes are suggested.
  16. Jiang, S.; Gao, Q.; Chen, H.; Roco, M.C.: ¬The roles of sharing, transfer, and public funding in nanotechnology knowledge-diffusion networks (2015) 0.00
    3.935247E-4 = product of:
      0.007083445 = sum of:
        0.007083445 = product of:
          0.021250334 = sum of:
            0.021250334 = weight(_text_:29 in 1823) [ClassicSimilarity], result of:
              0.021250334 = score(doc=1823,freq=2.0), product of:
                0.09112809 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025905682 = queryNorm
                0.23319192 = fieldWeight in 1823, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1823)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Date
    27. 4.2015 10:29:08
  17. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.00
    3.8998472E-4 = product of:
      0.0070197247 = sum of:
        0.0070197247 = product of:
          0.021059174 = sum of:
            0.021059174 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
              0.021059174 = score(doc=2733,freq=2.0), product of:
                0.090717286 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025905682 = queryNorm
                0.23214069 = fieldWeight in 2733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2733)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Date
    22. 3.2009 17:57:50
  18. Carmel, E.; Crawford, S.; Chen, H.: Browsing in hypertext : a cognitive study (1992) 0.00
    3.2498725E-4 = product of:
      0.0058497707 = sum of:
        0.0058497707 = product of:
          0.017549312 = sum of:
            0.017549312 = weight(_text_:22 in 7469) [ClassicSimilarity], result of:
              0.017549312 = score(doc=7469,freq=2.0), product of:
                0.090717286 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025905682 = queryNorm
                0.19345059 = fieldWeight in 7469, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=7469)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Source
    IEEE transactions on systems, man and cybernetics. 22(1992) no.5, S.865-884
  19. Leroy, G.; Chen, H.: Genescene: an ontology-enhanced integration of linguistic and co-occurrence based relations in biomedical texts (2005) 0.00
    3.2498725E-4 = product of:
      0.0058497707 = sum of:
        0.0058497707 = product of:
          0.017549312 = sum of:
            0.017549312 = weight(_text_:22 in 5259) [ClassicSimilarity], result of:
              0.017549312 = score(doc=5259,freq=2.0), product of:
                0.090717286 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025905682 = queryNorm
                0.19345059 = fieldWeight in 5259, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5259)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Date
    22. 7.2006 14:26:01
  20. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: ¬A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.00
    3.2498725E-4 = product of:
      0.0058497707 = sum of:
        0.0058497707 = product of:
          0.017549312 = sum of:
            0.017549312 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.017549312 = score(doc=5276,freq=2.0), product of:
                0.090717286 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025905682 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.33333334 = coord(1/3)
      0.055555556 = coord(1/18)
    
    Date
    22. 7.2006 16:14:37