Search (18 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
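The four facet constraints above are Lucene/Solr filter queries; the mixed bracket in year_i:[2000 TO 2010} is intentional half-open range syntax (2000 inclusive, 2010 exclusive). As a minimal sketch of the request behind a page like this - the endpoint URL, core name, and free-text query string are assumptions, since only the filters are shown here:

    # Minimal sketch of the Solr request behind this result page.
    # The endpoint, core name, and the free-text query "q" are assumptions;
    # only the four filter queries are taken from the page itself.
    import requests

    params = {
        "q": "computerlinguistik",  # hypothetical; the actual query string is not shown
        "fq": [
            'language_ss:"e"',
            'theme_ss:"Computerlinguistik"',
            'type_ss:"a"',
            "year_i:[2000 TO 2010}",  # half-open range: 2000 inclusive, 2010 exclusive
        ],
        "wt": "json",
        "debugQuery": "true",  # emits the score explanations reproduced below
    }
    resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
    print(resp.json()["response"]["numFound"])  # 18 on this page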
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07269405 = sum of:
      0.05420002 = product of:
        0.21680008 = sum of:
          0.21680008 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21680008 = score(doc=562,freq=2.0), product of:
              0.3857529 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045500398 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.018494027 = product of:
        0.036988053 = sum of:
          0.036988053 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036988053 = score(doc=562,freq=2.0), product of:
              0.15933464 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045500398 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
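The indented score breakdowns under each hit are Lucene ClassicSimilarity explain trees: a tf-idf leaf score is queryWeight * fieldWeight, with idf = 1 + ln(maxDocs/(docFreq+1)), tf = sqrt(termFreq), queryWeight = idf * queryNorm, and fieldWeight = tf * idf * fieldNorm; coord(m/n) then scales a clause by the fraction of query clauses that matched. As a check on the arithmetic, a minimal sketch that reproduces the 0.21680008 leaf for the term "3a" in hit 1 from the numbers printed in its tree:

    # Reproduces the ClassicSimilarity leaf score for term "3a" in doc 562
    # (hit 1) from the quantities printed in the explain tree above.
    import math

    doc_freq, max_docs = 24, 44218      # from idf(docFreq=24, maxDocs=44218)
    freq = 2.0                          # termFreq
    field_norm = 0.046875               # fieldNorm(doc=562)
    query_norm = 0.045500398            # queryNorm

    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # 8.478011
    tf = math.sqrt(freq)                                # 1.4142135
    query_weight = idf * query_norm                     # 0.3857529
    field_weight = tf * idf * field_norm                # 0.56201804
    leaf = query_weight * field_weight                  # 0.21680008
    clause = leaf * 0.25                # coord(1/4): 1 of 4 query clauses matched
    print(f"{leaf:.8f} {clause:.8f}")   # matches the tree up to float rounding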
  2. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.04
    0.044322617 = product of:
      0.088645235 = sum of:
        0.088645235 = sum of:
          0.045492508 = weight(_text_:p in 156) [ClassicSimilarity], result of:
            0.045492508 = score(doc=156,freq=2.0), product of:
              0.16359726 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.045500398 = queryNorm
              0.27807623 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
          0.04315273 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
            0.04315273 = score(doc=156,freq=2.0), product of:
              0.15933464 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045500398 = queryNorm
              0.2708308 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
  3. Drouin, P.: Term extraction using non-technical corpora as a point of leverage (2003) 0.03
    0.02599572 = product of:
      0.05199144 = sum of:
        0.05199144 = product of:
          0.10398288 = sum of:
            0.10398288 = weight(_text_:p in 8797) [ClassicSimilarity], result of:
              0.10398288 = score(doc=8797,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.63560283 = fieldWeight in 8797, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.125 = fieldNorm(doc=8797)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Ekmekcioglu, F.C.; Willett, P.: Effectiveness of stemming for Turkish text retrieval (2000) 0.02
    0.022746254 = product of:
      0.045492508 = sum of:
        0.045492508 = product of:
          0.090985015 = sum of:
            0.090985015 = weight(_text_:p in 5423) [ClassicSimilarity], result of:
              0.090985015 = score(doc=5423,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.55615246 = fieldWeight in 5423, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5423)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Hull, D.; Ait-Mokhtar, S.; Chuat, M.; Eisele, A.; Gaussier, E.; Grefenstette, G.; Isabelle, P.; Samuelsson, C.; Segond, F.: Language technologies and patent search and classification (2001) 0.02
    0.01949679 = product of:
      0.03899358 = sum of:
        0.03899358 = product of:
          0.07798716 = sum of:
            0.07798716 = weight(_text_:p in 6318) [ClassicSimilarity], result of:
              0.07798716 = score(doc=6318,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.47670212 = fieldWeight in 6318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6318)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Perera, P.; Witte, R.: A self-learning context-aware lemmatizer for German (2005) 0.01
    0.01299786 = product of:
      0.02599572 = sum of:
        0.02599572 = product of:
          0.05199144 = sum of:
            0.05199144 = weight(_text_:p in 4638) [ClassicSimilarity], result of:
              0.05199144 = score(doc=4638,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.31780142 = fieldWeight in 4638, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4638)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Ramisch, C.; Schreiner, P.; Idiart, M.; Villavicencio, A.: An evaluation of methods for the extraction of multiword expressions (20xx) 0.01
    0.01299786 = product of:
      0.02599572 = sum of:
        0.02599572 = product of:
          0.05199144 = sum of:
            0.05199144 = weight(_text_:p in 962) [ClassicSimilarity], result of:
              0.05199144 = score(doc=962,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.31780142 = fieldWeight in 962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0625 = fieldNorm(doc=962)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.01
    0.011373127 = product of:
      0.022746254 = sum of:
        0.022746254 = product of:
          0.045492508 = sum of:
            0.045492508 = weight(_text_:p in 1595) [ClassicSimilarity], result of:
              0.045492508 = score(doc=1595,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.27807623 = fieldWeight in 1595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    0.01089771 = product of:
      0.02179542 = sum of:
        0.02179542 = product of:
          0.04359084 = sum of:
            0.04359084 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.04359084 = score(doc=2541,freq=4.0), product of:
                0.15933464 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045500398 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  10. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.01
    0.010788183 = product of:
      0.021576365 = sum of:
        0.021576365 = product of:
          0.04315273 = sum of:
            0.04315273 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.04315273 = score(doc=5483,freq=2.0), product of:
                0.15933464 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045500398 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10.12.2000 18:22:35
  11. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.010788183 = product of:
      0.021576365 = sum of:
        0.021576365 = product of:
          0.04315273 = sum of:
            0.04315273 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04315273 = score(doc=3840,freq=2.0), product of:
                0.15933464 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045500398 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:22:33
  12. Nait-Baha, L.; Jackiewicz, A.; Djioua, B.; Laublet, P.: Query reformulation for information retrieval on the Web using the point of view methodology : preliminary results (2001) 0.01
    0.009748395 = product of:
      0.01949679 = sum of:
        0.01949679 = product of:
          0.03899358 = sum of:
            0.03899358 = weight(_text_:p in 249) [ClassicSimilarity], result of:
              0.03899358 = score(doc=249,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.23835106 = fieldWeight in 249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=249)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.01
    0.009748395 = product of:
      0.01949679 = sum of:
        0.01949679 = product of:
          0.03899358 = sum of:
            0.03899358 = weight(_text_:p in 6014) [ClassicSimilarity], result of:
              0.03899358 = score(doc=6014,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.23835106 = fieldWeight in 6014, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6014)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Argamon, S.; Whitelaw, C.; Chase, P.; Hota, S.R.; Garg, N.; Levitan, S.: Stylistic text classification using functional lexical features (2007) 0.01
    0.009748395 = product of:
      0.01949679 = sum of:
        0.01949679 = product of:
          0.03899358 = sum of:
            0.03899358 = weight(_text_:p in 280) [ClassicSimilarity], result of:
              0.03899358 = score(doc=280,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.23835106 = fieldWeight in 280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=280)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.009247013 = product of:
      0.018494027 = sum of:
        0.018494027 = product of:
          0.036988053 = sum of:
            0.036988053 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.036988053 = score(doc=4436,freq=2.0), product of:
                0.15933464 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045500398 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 2.2000 14:22:39
  16. Kettunen, K.; Kunttu, T.; Järvelin, K.: To stem or lemmatize a highly inflectional language in a probabilistic IR environment? (2005) 0.01
    0.008123662 = product of:
      0.016247325 = sum of:
        0.016247325 = product of:
          0.03249465 = sum of:
            0.03249465 = weight(_text_:p in 4395) [ClassicSimilarity], result of:
              0.03249465 = score(doc=4395,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.19862589 = fieldWeight in 4395, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4395)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To show that stem generation compares well with lemmatization as a morphological tool for a highly inflectional language for IR purposes in a best-match retrieval system.
    Design/methodology/approach - The effects of three different morphological methods - lemmatization, stemming and stem production - for Finnish are compared in a probabilistic IR environment (INQUERY). Evaluation is done using a four-point relevance scale which is partitioned differently in different test settings.
    Findings - Results show that stem production, a lighter method than morphological lemmatization, compares well with lemmatization in a best-match IR environment. Differences in performance between stem production and lemmatization are small and not statistically significant in most of the tested settings. It is also shown that a hitherto rather neglected method of morphological processing for Finnish, stemming, performs reasonably well, although the stemmer used - a Porter stemmer implementation - is far from optimal for a morphologically complex language like Finnish. In another series of tests, the effects of compound splitting and derivational expansion of queries are tested.
    Practical implications - The usefulness of morphological lemmatization and stem generation for IR purposes can be assessed along many dimensions. At the average P-R level the two behave very similarly in a probabilistic IR system, so the choice of method for highly inflectional languages needs to be weighed along other dimensions too.
    Originality/value - Results are achieved using Finnish as an example of a highly inflectional language. The results are of interest to anyone concerned with processing the morphological variation of a highly inflected language for IR purposes.
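The stemmer examined in this abstract is a Porter-family stemmer for Finnish; NLTK ships one in its Snowball package. A minimal sketch of the stemming side of the comparison - the example words are illustrative and not taken from the study:

    # Snowball (Porter-family) stemming for Finnish via NLTK; the inflected
    # forms of "talo" (house) below are illustrative, not from the paper.
    from nltk.stem.snowball import SnowballStemmer

    stemmer = SnowballStemmer("finnish")
    for word in ["talo", "talon", "taloissa", "taloista"]:
        print(word, "->", stemmer.stem(word))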
  17. Ahlgren, P.; Kekäläinen, J.: Indexing strategies for Swedish full text retrieval under different user scenarios (2007) 0.01
    0.008123662 = product of:
      0.016247325 = sum of:
        0.016247325 = product of:
          0.03249465 = sum of:
            0.03249465 = weight(_text_:p in 896) [ClassicSimilarity], result of:
              0.03249465 = score(doc=896,freq=2.0), product of:
                0.16359726 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.045500398 = queryNorm
                0.19862589 = fieldWeight in 896, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=896)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    0.0053940914 = product of:
      0.010788183 = sum of:
        0.010788183 = product of:
          0.021576365 = sum of:
            0.021576365 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.021576365 = score(doc=1616,freq=2.0), product of:
                0.15933464 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045500398 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second-largest at-home Internet population in the world in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). However, research in crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to suggest additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus with a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term. The direct translation of the input term can also be retrieved in most cases.
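The co-occurrence step this abstract describes can be illustrated on a toy aligned corpus: count how often an English term and a Chinese term appear in aligned sentence pairs, then rank candidate associations. The corpus, tokenization, and the Dice scoring below are stand-in assumptions - the paper's actual weighting scheme and the subsequent Hopfield-network activation are omitted:

    # Toy sketch of co-occurrence analysis over an aligned English/Chinese
    # corpus, as described in the abstract above. The sentence pairs and the
    # Dice association score are illustrative assumptions; the paper's own
    # weighting and Hopfield-network steps are not reproduced here.
    from collections import Counter
    from itertools import product

    aligned_pairs = [
        (["department", "of", "justice"], ["司法部"]),
        (["the", "justice", "department"], ["司法部"]),
        (["high", "court"], ["高等法院"]),
    ]

    cooc = Counter()
    en_freq, zh_freq = Counter(), Counter()
    for en_sent, zh_sent in aligned_pairs:
        for e, z in product(set(en_sent), set(zh_sent)):
            cooc[(e, z)] += 1           # term pair co-occurs in this aligned pair
        en_freq.update(set(en_sent))
        zh_freq.update(set(zh_sent))

    # Rank candidate cross-lingual associations by a simple Dice coefficient.
    for (e, z), n in sorted(cooc.items(), key=lambda kv: -kv[1]):
        dice = 2 * n / (en_freq[e] + zh_freq[z])
        print(f"{e} ~ {z}: dice={dice:.2f}")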