Search (162 results, page 1 of 9)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × year_i:[2000 TO 2010}
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.0846826 = sum of:
      0.054507013 = product of:
        0.21802805 = sum of:
          0.21802805 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21802805 = score(doc=562,freq=2.0), product of:
              0.3879378 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045758117 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.030175587 = product of:
        0.04526338 = sum of:
          0.008065818 = weight(_text_:a in 562) [ClassicSimilarity], result of:
            0.008065818 = score(doc=562,freq=8.0), product of:
              0.052761257 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.045758117 = queryNorm
              0.15287387 = fieldWeight in 562, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.03719756 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03719756 = score(doc=562,freq=2.0), product of:
              0.16023713 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045758117 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.6666667 = coord(2/3)
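     The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output for the first result. As a minimal sketch, assuming the standard ClassicSimilarity formulas, the leaf clause for "_text_:22 in 562" can be reproduced as follows (the function is illustrative, not part of the search software):

```python
import math

def classic_clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One leaf clause of a Lucene ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight, with
      tf  = sqrt(freq)
      idf = 1 + ln(maxDocs / (docFreq + 1))
      queryWeight = idf * queryNorm
      fieldWeight = tf * idf * fieldNorm
    """
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    tf = math.sqrt(freq)
    return (idf * query_norm) * (tf * idf * field_norm)

# Constants taken from the "weight(_text_:22 in 562)" clause shown above
print(classic_clause_score(freq=2.0, doc_freq=3622, max_docs=44218,
                           query_norm=0.045758117, field_norm=0.046875))
# ~0.0372, matching the 0.03719756 contribution in the breakdown
```

     The clause scores are then combined by the sums and coord() factors shown in the tree.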
    
    Abstract
     Document representations for text classification are typically based on the classical bag-of-words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvements in the results.
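     A minimal sketch of the idea, assuming a scikit-learn pipeline and an invented keyword-to-concept mapping in place of the authors' background knowledge: bag-of-words features are concatenated with concept features and classified by a boosted ensemble of weak learners.

```python
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Invented background knowledge: surface terms mapped to broader concepts
CONCEPTS = {"soccer": "sport", "tennis": "sport", "euro": "finance", "stock": "finance"}
CONCEPT_LABELS = sorted(set(CONCEPTS.values()))

def concept_features(texts):
    """Per document, count how often each concept is triggered by a mapped term."""
    return csr_matrix([[sum(1 for t in doc.lower().split() if CONCEPTS.get(t) == c)
                        for c in CONCEPT_LABELS] for doc in texts])

docs = ["soccer match and standings", "tennis results this week",
        "euro falls against the dollar", "stock markets rally this week"]
labels = ["sports", "sports", "economy", "economy"]

bow = CountVectorizer().fit(docs)
X_train = hstack([bow.transform(docs), concept_features(docs)])  # terms + concepts
clf = AdaBoostClassifier(n_estimators=50).fit(X_train, labels)

query = ["euro exchange rates"]
X_query = hstack([bow.transform(query), concept_features(query)])
print(clf.predict(X_query))   # ['economy'] -- the concept feature carries the decision
```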
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Type
    a
  2. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.04
    
    Abstract
     The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between speed performance and translation performance, and in what form the translated result is presented. About 100,000 Web pages translated in the last four months of 1997 are used for a quantitative study of online and real-time Web page translation.
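     A minimal sketch of dictionary-based query translation with corpus-based candidate selection, assuming an invented toy dictionary and corpus frequencies; MTIR's actual resources and its disambiguation are richer.

```python
from collections import Counter

# Toy bilingual dictionary: source term -> candidate translations (illustrative only)
DICT = {
    "bank": ["银行", "河岸"],
    "information": ["信息"],
    "retrieval": ["检索"],
}

# Toy monolingual target-language corpus frequencies used to rank candidates
corpus_freq = Counter({"银行": 120, "河岸": 7, "信息": 300, "检索": 150})

def translate_query(terms, dictionary=DICT, freq=corpus_freq):
    """Pick, for each source term, the translation best supported by the corpus."""
    translated = []
    for term in terms:
        candidates = dictionary.get(term, [term])   # pass unknown terms through
        translated.append(max(candidates, key=lambda c: freq.get(c, 0)))
    return translated

print(translate_query(["bank", "information", "retrieval"]))
# ['银行', '信息', '检索']
```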
    Date
    16. 2.2000 14:22:39
    Type
    a
  3. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    
    Date
    1. 3.2013 14:56:22
  4. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.03
    
    Abstract
     The two seemingly conflicting tendencies, synergy and divergence, are both fundamental to the advancement of any science. Their interplay defines the demarcation line between application-oriented and theoretical research. The papers in this festschrift in honour of Peter Hellwig are geared to answer the questions that arise from this insight: where the discipline of Computational Linguistics currently stands, what has been achieved so far, and what should be done next. Given the complexity of such questions, no simple answers can be expected. However, each of the practitioners and researchers contributes, from their very own perspective, a piece of insight into the overall picture of today's and tomorrow's computational linguistics.
    Content
    Contents: Manfred Klenner / Henriette Visser: Introduction - Khurshid Ahmad: Writing Linguistics: When I use a word it means what I choose it to mean - Jürgen Handke: 2000 and Beyond: The Potential of New Technologies in Linguistics - Jurij Apresjan / Igor Boguslavsky / Leonid Iomdin / Leonid Tsinman: Lexical Functions in NU: Possible Uses - Hubert Lehmann: Practical Machine Translation and Linguistic Theory - Karin Haenelt: A Contextbased Approach towards Content Processing of Electronic Documents - Petr Sgall / Eva Hajicová: Are Linguistic Frameworks Comparable? - Wolfgang Menzel: Theory and Applications in Computational Linguistics - Is there Common Ground? - Robert Porzel / Michael Strube: Towards Context-adaptive Natural Language Processing Systems - Nicoletta Calzolari: Language Resources in a Multilingual Setting: The European Perspective - Piek Vossen: Computational Linguistics for Theory and Practice.
    Editor
    Klenner, M. u. H. Visser
  5. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.02
    
    Abstract
     The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research in crossing language boundaries, especially between European languages and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using the Hopfield network, based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term. The direct translation of the input term can also be retrieved in most cases.
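     A minimal sketch of the co-occurrence step over a sentence-aligned parallel corpus, assuming invented toy segments and simple count-based association; the paper's Hopfield-network expansion is not reproduced.

```python
from collections import Counter
from itertools import product

# Toy sentence-aligned corpus: (English segment terms, Chinese segment terms)
aligned = [
    ({"court", "judge"}, {"法院", "法官"}),
    ({"court", "appeal"}, {"法院", "上诉"}),
    ({"judge", "ruling"}, {"法官", "裁决"}),
]

cooc = Counter()
for en_terms, zh_terms in aligned:
    for pair in product(en_terms, zh_terms):
        cooc[pair] += 1

def related_terms(en_term, k=3):
    """Chinese terms most often aligned with an English input term."""
    scored = [(zh, n) for (en, zh), n in cooc.items() if en == en_term]
    return sorted(scored, key=lambda x: -x[1])[:k]

print(related_terms("court"))   # [('法院', 2), ('法官', 1), ('上诉', 1)]; tie order may vary
```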
    Footnote
    Teil eines Themenheftes: "Web retrieval and mining: A machine learning perspective"
    Type
    a
  6. Pirkola, A.; Hedlund, T.; Keskustalo, H.; Järvelin, K.: Dictionary-based cross-language information retrieval : problems, methods, and research findings (2001) 0.02
    
    Type
    a
  7. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    
    Abstract
     The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have online or web-based interfaces, the dictionaries and other computer components must have fast response and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
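     A minimal sketch of dictionary-based spelling suggestion, assuming a toy vocabulary and plain Levenshtein distance; AZdict/ChemSpell additionally use word attributes and chemistry-aware similarity.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

VOCAB = ["toxicology", "benzene", "acetaminophen", "arsenic", "toluene"]  # toy dictionary

def suggest(word, vocab=VOCAB, k=3):
    """Closest dictionary entries to a (possibly misspelled) query term."""
    return sorted(vocab, key=lambda w: levenshtein(word, w))[:k]

print(suggest("tolulene"))   # 'toluene' ranks first (edit distance 1)
```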
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
    Type
    a
  8. Nakagawa, H.; Mori, T.: Automatic term recognition based on statistics of compound nouns and their components (2003) 0.02
    
    Type
    a
  9. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.02
    
    Abstract
    Linguistics is the scientific study of language which emphasizes language spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in contribution to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, these being two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
    Type
    a
  10. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.02
    
    Abstract
     This paper gives an outline of the final results of the TransRouter project. Within the scope of this project, a decision support system for translation managers has been developed that supports the selection of appropriate routes for translation projects. In this paper the emphasis is on the decision model, which is based on a stepwise refined assessment of translation routes. The workflow of using this system is considered as well.
    Date
    10.12.2000 18:22:35
    Type
    a
  11. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.02
    
    Abstract
     The present study investigates the ability of a bibliometric-based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
    Type
    a
  12. Perera, P.; Witte, R.: ¬A self-learning context-aware lemmatizer for German (2005) 0.01
    
    Abstract
    Accurate lemmatization of German nouns mandates the use of a lexicon. Comprehensive lexicons, however, are expensive to build and maintain. We present a self-learning lemmatizer capable of automatically creating a full-form lexicon by processing German documents.
    Content
     Cf.: http://acl.ldc.upenn.edu//H/H05/H05-1080.pdf.
    Type
    a
  13. Radev, D.; Fan, W.; Qu, H.; Wu, H.; Grewal, A.: Probabilistic question answering on the Web (2005) 0.01
    
    Abstract
     Web-based search engines such as Google and NorthernLight return documents that are relevant to a user query, not answers to user questions. We have developed an architecture that augments existing search engines so that they support natural language question answering. The process entails five steps: query modulation, document retrieval, passage extraction, phrase extraction, and answer ranking. In this article, we describe some probabilistic approaches to the last three of these stages. We show how our techniques apply to a number of existing search engines, and we also present results contrasting three different methods for question answering. Our algorithm, probabilistic phrase reranking (PPR), uses proximity and question type features and achieves a total reciprocal document rank of .20 on the TREC8 corpus. Our techniques have been implemented as a Web-accessible system, called NSIR.
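     A minimal sketch of the evaluation measure mentioned, assuming total reciprocal document rank is the sum of reciprocal ranks of all relevant documents per question, averaged over questions; the paper's exact formulation may differ.

```python
def total_reciprocal_document_rank(runs):
    """runs: list of (ranked_doc_ids, relevant_doc_ids) pairs, one per question.
    For each question, sum 1/rank over every relevant document retrieved,
    then average over questions."""
    scores = []
    for ranked, relevant in runs:
        scores.append(sum(1.0 / (i + 1) for i, d in enumerate(ranked) if d in relevant))
    return sum(scores) / len(scores)

# Toy example: two questions, five ranked documents each
runs = [
    (["d3", "d7", "d1", "d9", "d2"], {"d7", "d9"}),   # relevant at ranks 2 and 4
    (["d5", "d4", "d8", "d6", "d0"], {"d5"}),         # relevant at rank 1
]
print(round(total_reciprocal_document_rank(runs), 3))  # (0.5 + 0.25 + 1.0) / 2 = 0.875
```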
    Type
    a
  14. Moisl, H.: Artificial neural networks and Natural Language Processing (2009) 0.01
    
    Abstract
    This entry gives an overview of work to date on natural language processing (NLP) using artificial neural networks (ANN). It is in three main parts: the first gives a brief introduction to ANNs, the second outlines some of the main issues in ANN-based NLP, and the third surveys specific application areas. Each part cites a representative selection of research literature that itself contains pointers to further reading.
    Type
    a
  15. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.01
    
    Abstract
     This paper presents a method that exploits the hierarchical structure of an indexing vocabulary to guide the development and training of machine learning methods for automatic text categorization. We present the design of a hierarchical classifier based on the divide-and-conquer principle. The method is evaluated using backpropagation neural networks as the machine learning algorithm, which learn to assign MeSH categories to a subset of MEDLINE records. Comparisons with the traditional Rocchio algorithm adapted for text categorization, as well as with flat neural network classifiers, are provided. The results indicate that the use of hierarchical structures improves performance significantly.
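     A minimal sketch of divide-and-conquer hierarchical classification, assuming a toy two-level hierarchy and scikit-learn linear models in place of the paper's backpropagation networks over MeSH: one classifier is trained per node, and documents are routed top-down.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy hierarchy: root -> {cardio, onco}; each node's classifier sees only its children
docs = ["heart valve surgery", "myocardial infarction therapy",
        "breast cancer screening", "tumor chemotherapy protocol"]
top_labels = ["cardio", "cardio", "onco", "onco"]

leaf_docs = {"cardio": (["heart valve surgery", "myocardial infarction therapy"],
                        ["surgery", "therapy"]),
             "onco":   (["breast cancer screening", "tumor chemotherapy protocol"],
                        ["screening", "therapy"])}

root_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(docs, top_labels)
leaf_clf = {node: make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(X, y)
            for node, (X, y) in leaf_docs.items()}

def classify(text):
    """Route a document down the hierarchy: top-level node first, then its leaf model."""
    node = root_clf.predict([text])[0]
    return node, leaf_clf[node].predict([text])[0]

print(classify("breast cancer screening visit"))   # ('onco', 'screening')
```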
    Source
    Advances in classification research, vol.10: proceedings of the 10th ASIS SIG/CR Classification Research Workshop. Ed.: Albrechtsen, H. u. J.E. Mai
    Type
    a
  16. Chen, K.-H.: Evaluating Chinese text retrieval with multilingual queries (2002) 0.01
    
    Abstract
     This paper reports the design of a Chinese test collection with multilingual queries and the application of this test collection to evaluate information retrieval systems. The effective indexing units, IR models, translation techniques, and query expansion for Chinese text retrieval are identified. The collaboration of East Asian countries on the construction of test collections for cross-language multilingual text retrieval is also discussed in this paper. As well, a tool is designed to help assessors judge relevance and gather the events of relevance judgment. The log file created by this tool will be used to analyze the behaviors of assessors in the future.
    Type
    a
  17. Kuo, J.-S.; Li, H.; Yang, Y.-K.: Active learning for constructing transliteration lexicons from the Web (2008) 0.01
    
    Abstract
    This article presents an adaptive learning framework for Phonetic Similarity Modeling (PSM) that supports the automatic construction of transliteration lexicons. The learning algorithm starts with minimum prior knowledge about machine transliteration and acquires knowledge iteratively from the Web. We study the unsupervised learning and the active learning strategies that minimize human supervision in terms of data labeling. The learning process refines the PSM and constructs a transliteration lexicon at the same time. We evaluate the proposed PSM and its learning algorithm through a series of systematic experiments, which show that the proposed framework is reliably effective on two independent databases.
    Type
    a
  18. Cimiano, P.; Völker, J.; Studer, R.: Ontologies on demand? : a description of the state-of-the-art, applications, challenges and trends for ontology learning from text (2006) 0.01
    
    Abstract
    Ontologies are nowadays used for many applications requiring data, services and resources in general to be interoperable and machine understandable. Such applications are for example web service discovery and composition, information integration across databases, intelligent search, etc. The general idea is that data and services are semantically described with respect to ontologies, which are formal specifications of a domain of interest, and can thus be shared and reused in a way such that the shared meaning specified by the ontology remains formally the same across different parties and applications. As the cost of creating ontologies is relatively high, different proposals have emerged for learning ontologies from structured and unstructured resources. In this article we examine the maturity of techniques for ontology learning from textual resources, addressing the question whether the state-of-the-art is mature enough to produce ontologies 'on demand'.
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.315-320
    Type
    a
  19. Shaalan, K.; Raza, H.: NERA: Named Entity Recognition for Arabic (2009) 0.01
    
    Abstract
    Name identification has been worked on quite intensively for the past few years, and has been incorporated into several products revolving around natural language processing tasks. Many researchers have attacked the name identification problem in a variety of languages, but only a few limited research efforts have focused on named entity recognition for Arabic script. This is due to the lack of resources for Arabic named entities and the limited amount of progress made in Arabic natural language processing in general. In this article, we present the results of our attempt at the recognition and extraction of the 10 most important categories of named entities in Arabic script: the person name, location, company, date, time, price, measurement, phone number, ISBN, and file name. We developed the system Named Entity Recognition for Arabic (NERA) using a rule-based approach. The resources created are: a Whitelist representing a dictionary of names, and a grammar, in the form of regular expressions, which are responsible for recognizing the named entities. A filtration mechanism is used that serves two different purposes: (a) revision of the results from a named entity extractor by using metadata, in terms of a Blacklist or rejecter, about ill-formed named entities and (b) disambiguation of identical or overlapping textual matches returned by different name entity extractors to get the correct choice. In NERA, we addressed major challenges posed by NER in the Arabic language arising due to the complexity of the language, peculiarities in the Arabic orthographic system, nonstandardization of the written text, ambiguity, and lack of resources. NERA has been effectively evaluated using our own tagged corpus; it achieved satisfactory results in terms of precision, recall, and F-measure.
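     A minimal sketch of the rule-based pipeline described (regular-expression grammar, Whitelist, Blacklist rejecter), assuming invented English patterns and lists; NERA's actual grammars target Arabic script and are far more elaborate.

```python
import re

WHITELIST = {"Khalid Shaalan", "Hend Raza"}          # known person names (illustrative)
BLACKLIST = {"New Year"}                             # rejecter for ill-formed matches
PATTERNS = {
    "date":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "phone":  re.compile(r"\+\d{2,3}-\d{6,10}\b"),
    "person": re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),
}

def extract_entities(text):
    """Regex extraction, then filtration: keep whitelisted persons, drop blacklisted hits."""
    entities = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if match in BLACKLIST:
                continue                              # rejecter
            if label == "person" and match not in WHITELIST:
                continue                              # person names must be whitelisted
            entities.append((label, match))
    return entities

print(extract_entities("Khalid Shaalan called +44-1234567 on 12/03/2009 before New Year."))
# [('date', '12/03/2009'), ('phone', '+44-1234567'), ('person', 'Khalid Shaalan')]
```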
    Type
    a
  20. Tseng, Y.-H.: Automatic thesaurus generation for Chinese documents (2002) 0.01
    
    Abstract
     Tseng constructs a word co-occurrence based thesaurus by means of the automatic analysis of Chinese text. Words are identified by a longest dictionary match, supplemented by a keyword extraction algorithm that merges back nearby tokens and accepts shorter strings of characters if they occur more often than the longest string. Single-character auxiliary words are a major source of error, but this can be greatly reduced with the use of a 70-character, 2680-word stop list. Extracted terms with their associated document weights are sorted by decreasing frequency, and the top of this list is associated using a Dice coefficient, modified to account for longer documents, on the weights of term pairs. Co-occurrence is counted not in the document as a whole but in paragraph- or sentence-sized sections in order to reduce computation time. A window of 29 characters or 11 words was found to be sufficient. A thesaurus was produced from 25,230 Chinese news articles and judges were asked to review the top 50 terms associated with each of 30 single-word query terms. They determined 69% to be relevant.
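     A minimal sketch of the association step, assuming section-level co-occurrence counts and the plain Dice coefficient 2|A∩B| / (|A|+|B|); the paper's length-modified weighting is not reproduced, and the toy English segments stand in for the Chinese news text.

```python
from collections import defaultdict
from itertools import combinations

# Toy "paragraph or sentence size sections" (the paper uses Chinese news articles)
sections = [
    {"stock", "market", "index"},
    {"stock", "market", "crash"},
    {"index", "fund", "market"},
]

occ = defaultdict(int)          # number of sections containing each term
cooc = defaultdict(int)         # number of sections containing each term pair
for terms in sections:
    for t in terms:
        occ[t] += 1
    for a, b in combinations(sorted(terms), 2):
        cooc[(a, b)] += 1

def dice(a, b):
    """Dice coefficient of two terms over section-level co-occurrence."""
    pair = tuple(sorted((a, b)))
    return 2 * cooc[pair] / (occ[a] + occ[b])

print(dice("stock", "market"))   # 2*2 / (2+3) = 0.8
print(dice("stock", "fund"))     # 0.0
```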
    Type
    a

Types

  • a 148
  • m 11
  • el 8
  • s 7