Search (19 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
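
A note for anyone reproducing this result set: the entries above are Solr filter queries, and the mixed brackets in year_i:[2000 TO 2010} are standard Solr range syntax ([ is inclusive, } is exclusive), i.e. publication years 2000-2009. Below is a minimal Python sketch of the equivalent request; the host, port, and core name are assumptions, only the field names and filter values come from this page.

    import requests

    params = {
        "q": "*:*",  # match everything; the filters below do the narrowing
        "fq": [      # one fq entry per active filter shown above
            'language_ss:"e"',
            'theme_ss:"Computerlinguistik"',
            'type_ss:"a"',
            "year_i:[2000 TO 2010}",  # [ = inclusive, } = exclusive
        ],
        "wt": "json",
    }
    # hypothetical endpoint - substitute the real Solr host and core name
    resp = requests.get("http://localhost:8983/solr/litdok/select", params=params)
    print(resp.json()["response"]["numFound"])  # 19 for this page's filters
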
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07284273 = sum of:
      0.054310877 = product of:
        0.21724351 = sum of:
          0.21724351 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21724351 = score(doc=562,freq=2.0), product of:
              0.38654187 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045593463 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.018531853 = product of:
        0.037063707 = sum of:
          0.037063707 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.037063707 = score(doc=562,freq=2.0), product of:
              0.15966053 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045593463 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
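
    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. Each leaf uses tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a term score of queryWeight * fieldWeight, after which the coord factors scale the sums. A minimal Python sketch (function names are mine) reproduces result 1's total from the numbers shown:

        from math import log, sqrt

        MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.045593463, 0.046875

        def idf(doc_freq):
            # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + log(MAX_DOCS / (doc_freq + 1))

        def term_score(freq, doc_freq):
            # queryWeight * fieldWeight, exactly as in the explain leaves
            i = idf(doc_freq)
            return (i * QUERY_NORM) * (sqrt(freq) * i * FIELD_NORM)

        # the "_text_:3a" leaf is scaled by coord(1/4), "_text_:22" by coord(1/2)
        score = term_score(2.0, 24) * 0.25 + term_score(2.0, 3622) * 0.5
        print(score)  # ~0.07284273, matching the total above up to float rounding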
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.05
    0.048942618 = ClassicSimilarity score from weight(_text_:r in 5483, freq=4.0, idf=3.3102584) and weight(_text_:22 in 5483, freq=2.0, idf=3.5018296); full breakdown as in result 1
    
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), ed. G. Knorz and R. Kuhlen
  3. Bakar, Z.A.; Sembok, T.M.T.; Yusoff, M.: An evaluation of retrieval effectiveness using spelling-correction and string-similarity matching methods on Malay texts (2000) 0.02
    0.020281417 = ClassicSimilarity score from weight(_text_:r in 4804, freq=12.0, idf=3.3102584); full breakdown as in result 1
    
    Abstract
    This article evaluates the effectiveness of spelling-correction and string-similarity matching methods in retrieving similar words from a Malay dictionary for a given set of query words. The spelling-correction techniques used are SPEEDCOP, Soundex, Davidson, Phonic, and Hartlib. Two dynamic-programming methods, measuring the longest common subsequence and the edit-cost distance, are also used. Several search combinations of query and dictionary words are performed in the experiments, the best being one that stems both query and dictionary words using an existing Malay stemming algorithm. The retrieval effectiveness (E) and retrieved-and-relevant (R&R) mean measures are calculated from a weighted combination of recall and precision values. Results from these experiments are then compared with those of digram, a string-similarity method. The best R&R and E results are given by using digram; of the two dynamic-programming methods, edit-cost distance produces the better E results, and both rank second on the R&R mean measure.
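
    Two of the measures named above are easy to make concrete. Digram similarity compares the sets of adjacent character pairs of two words (Dice's coefficient is the usual formula), and edit-cost distance is the classic dynamic-programming string distance. A rough sketch, not the authors' code, with example words chosen here for illustration:

        def digrams(word):
            return {word[i:i + 2] for i in range(len(word) - 1)}

        def digram_similarity(a, b):
            # Dice coefficient over character digrams
            da, db = digrams(a), digrams(b)
            return 2 * len(da & db) / (len(da) + len(db))

        def edit_distance(a, b):
            # dynamic-programming edit-cost distance with unit costs
            d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
                 for i in range(len(a) + 1)]
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                                  d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
            return d[-1][-1]

        print(digram_similarity("kucing", "kuching"))  # 0.727...
        print(edit_distance("kucing", "kuching"))      # 1
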
  4. Humphreys, K.; Demetriou, G.; Gaizauskas, R.: Bioinformatics applications of information extraction from scientific journal articles (2000) 0.02
    0.01931966 = ClassicSimilarity score from weight(_text_:r in 4545, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  5. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.02
    0.016559707 = ClassicSimilarity score from weight(_text_:r in 6501, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  6. Xu, J.; Weischedel, R.; Licuanan, A.: Evaluation of an extraction-based approach to answering definitional questions (2004) 0.01
    0.013799756 = ClassicSimilarity score from weight(_text_:r in 4107, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  7. Perera, P.; Witte, R.: A self-learning context-aware lemmatizer for German (2005) 0.01
    0.011039805 = ClassicSimilarity score from weight(_text_:r in 4638, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  8. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    0.010919999 = ClassicSimilarity score from weight(_text_:22 in 2541, freq=4.0, idf=3.5018296); full breakdown as in result 1
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, p.22-29
  9. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.010810248 = ClassicSimilarity score from weight(_text_:22 in 156, freq=2.0, idf=3.5018296); full breakdown as in result 1
    
    Date
    8. 3.2007 19:55:22
  10. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.010810248 = ClassicSimilarity score from weight(_text_:22 in 3840, freq=2.0, idf=3.5018296); full breakdown as in result 1
    
    Date
    27. 8.2011 14:22:33
  11. Green, R.: Automated identification of frame semantic relational structures (2000) 0.01
    0.00965983 = ClassicSimilarity score from weight(_text_:r in 110, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  12. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.009265927 = ClassicSimilarity score from weight(_text_:22 in 4436, freq=2.0, idf=3.5018296); full breakdown as in result 1
    
    Date
    16. 2.2000 14:22:39
  13. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.01
    0.0082798535 = ClassicSimilarity score from weight(_text_:r in 5480, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), ed. G. Knorz and R. Kuhlen
  14. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.01
    0.0082798535 = ClassicSimilarity score from weight(_text_:r in 6014, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  15. Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001) 0.01
    0.0078063207 = ClassicSimilarity score from weight(_text_:r in 5188, freq=4.0, idf=3.3102584); full breakdown as in result 1
    
    Abstract
    Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE, evaluated on a zero-to-four scale from stop word to definite content word, they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996, and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lower-casing, replacement of non-alphanumerics with spaces, and the reduction of multiple spaces. In the "strength of context" approach each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, the Robertson-Sparck Jones relevance weight for each term is computed, negative weights are replaced with zero, those below a randomness threshold are ignored, and the remainder are summed for each document, yielding a score for the document; the term is finally assigned the average document score over the documents in which it occurred. The average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase's frequency of occurrence would be equal to, or less than, Poisson expectations, is computed, and a score assigned which is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered content-bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by its occurrences in the Journal, and taking the product of all these values. The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method; the union set had 476 phrases. A second selection was made of two-word phrases each occurring in only three documents, with a union of 599 phrases. A judge then ranked the two sets of terms as to subject specificity on a 0 to 4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. All three methods move content-bearing terms forward in the lists, as does the use of the sum of the logs of the three methods.
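
    Two of the three scores lend themselves to a compact sketch: the Robertson-Sparck Jones relevance weight has a standard closed form, and the clumping p-value can be computed with a direct Poisson sum. This is one plausible reading of the description above, not Kim and Wilbur's implementation; the parameter names and toy numbers are mine:

        from math import exp, log

        def rsj_weight(N, R, n, r):
            # Robertson-Sparck Jones relevance weight with 0.5 corrections:
            # N docs, R relevant docs, n docs with the term, r relevant docs with it
            return log(((r + 0.5) * (N - n - R + r + 0.5)) /
                       ((n - r + 0.5) * (R - r + 0.5)))

        def poisson_cdf(k, lam):
            # P(X <= k) for X ~ Poisson(lam), summed term by term
            term, total = exp(-lam), 0.0
            for i in range(int(k) + 1):
                total += term
                term *= lam / (i + 1)
            return total

        def clumping_score(occurrences, doc_freq, n_docs):
            # a "random" phrase spreads Poisson-like over documents; content-bearing
            # phrases clump into fewer documents than the Poisson model expects
            expected_df = n_docs * (1 - exp(-occurrences / n_docs))
            p = poisson_cdf(doc_freq, expected_df)
            return -log(max(p, 1e-300))  # negative log of the p-value

        print(rsj_weight(N=10000, R=50, n=200, r=25))  # ~4.0
        print(clumping_score(30, 8, 1000))  # ~12.8: clumped, likely content-bearing
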
  16. Chandrasekar, R.; Bangalore, S.: Glean : using syntactic information in document filtering (2002) 0.01
    0.006899878 = ClassicSimilarity score from weight(_text_:r in 4257, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  17. Kettunen, K.; Kunttu, T.; Järvelin, K.: To stem or lemmatize a highly inflectional language in a probabilistic IR environment? (2005) 0.01
    0.006899878 = ClassicSimilarity score from weight(_text_:r in 4395, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
    Abstract
    Purpose - To show that stem generation compares well with lemmatization as a morphological tool for a highly inflectional language for IR purposes in a best-match retrieval system. Design/methodology/approach - The effects of three different morphological methods - lemmatization, stemming and stem production - for Finnish are compared in a probabilistic IR environment (INQUERY). Evaluation is done using a four-point relevance scale which is partitioned differently in different test settings. Findings - Results show that stem production, a lighter method than morphological lemmatization, compares well with lemmatization in a best-match IR environment. Differences in performance between stem production and lemmatization are small and not statistically significant in most of the tested settings. It is also shown that a hitherto rather neglected method of morphological processing for Finnish, stemming, performs reasonably well, although the stemmer used - a Porter stemmer implementation - is far from optimal for a morphologically complex language like Finnish. In another series of tests, the effects of compound splitting and derivational expansion of queries are tested. Practical implications - The usefulness of morphological lemmatization and stem generation for IR can be judged by many factors. At the average precision-recall level the two behave very similarly in a probabilistic IR system, so with highly inflectional languages the choice of method needs to be weighed along other dimensions too. Originality/value - Results are achieved using Finnish as an example of a highly inflectional language. The results are of interest to anyone concerned with processing the morphological variation of a highly inflected language for IR purposes.
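
    To make the three options concrete, here is a toy illustration with the Finnish word "taloissa" ("in the houses", lemma "talo"). This is not the paper's tooling - the study used INQUERY with real morphological analyzers - and the suffix list and lexicon below are invented for illustration only:

        # lemmatization needs a lexicon or a morphological analyzer
        LEMMA_LEXICON = {"taloissa": "talo"}

        def toy_stem(word):
            # stemming: strip one inflectional suffix by rule
            for suffix in ("issa", "issä", "ssa", "ssä", "lla", "llä"):
                if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                    return word[:-len(suffix)]
            return word

        def toy_stem_production(word):
            # stem production: emit several candidate stems instead of one
            return {word[:k] for k in range(max(3, len(word) - 4), len(word) + 1)}

        w = "taloissa"
        print(LEMMA_LEXICON.get(w, w))   # lemmatization -> "talo"
        print(toy_stem(w))               # stemming -> "talo"
        print(toy_stem_production(w))    # {"talo", "taloi", "talois", ...}
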
  18. Bird, S.; Dale, R.; Dorr, B.; Gibson, B.; Joseph, M.; Kan, M.-Y.; Lee, D.; Powley, B.; Radev, D.; Tan, Y.F.: The ACL Anthology Reference Corpus : a reference dataset for bibliographic research in computational linguistics (2008) 0.01
    0.0055199023 = ClassicSimilarity score from weight(_text_:r in 2804, freq=2.0, idf=3.3102584); full breakdown as in result 1
    
  19. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    0.005405124 = ClassicSimilarity score from weight(_text_:22 in 1616, freq=2.0, idf=3.5018296); full breakdown as in result 1
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language that differs from that of the input term. The direct translation of the input term can also be retrieved in most cases.
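
    The core pipeline described above - co-occurrence analysis over aligned English/Chinese documents, then Hopfield-network spreading activation to suggest additional thesaurus terms - can be sketched in a few lines. The terms and weight matrix below are invented placeholders; the paper derives its weights from the HKSAR Department of Justice parallel corpus:

        import numpy as np

        terms = ["court", "judge", "appeal", "法院", "法官"]
        # invented symmetric co-occurrence weights (rows/columns follow `terms`)
        W = np.array([
            [0.0, 0.6, 0.4, 0.8, 0.3],
            [0.6, 0.0, 0.2, 0.3, 0.9],
            [0.4, 0.2, 0.0, 0.3, 0.2],
            [0.8, 0.3, 0.3, 0.0, 0.5],
            [0.3, 0.9, 0.2, 0.5, 0.0],
        ])

        def suggest(seed, iters=10, theta=0.5):
            # Hopfield-style spreading activation: clamp the seed term at 1.0,
            # propagate activation over co-occurrence links through a sigmoid,
            # and return terms whose activation clears the threshold
            mu = np.zeros(len(terms))
            seed_i = terms.index(seed)
            for _ in range(iters):
                mu = 1.0 / (1.0 + np.exp(-4.0 * (W @ mu - 0.4)))
                mu[seed_i] = 1.0
            return [t for t, a in zip(terms, mu) if a > theta and t != seed]

        print(suggest("court"))  # cross-lingual neighbours, e.g. "judge", "法院"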