Search (30 results, page 1 of 2)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
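The three active filters above are Lucene/Solr field queries; the year filter "year_i:[2000 TO 2010}" uses half-open range syntax, so 2000 is included and 2010 is excluded. A minimal sketch of composing such filters into request parameters (assuming a Solr backend, which the syntax suggests but the page does not state):

```python
# Sketch: the three facet filters shown above as Solr "fq" parameters.
# Field names and values are copied from this page; the request-building
# itself is an assumption about the backend.
from urllib.parse import urlencode

filters = [
    'theme_ss:"Computerlinguistik"',
    'type_ss:"a"',
    'year_i:[2000 TO 2010}',  # half-open range: 2000 inclusive, 2010 exclusive
]
params = [("q", "*:*")] + [("fq", f) for f in filters]
query_string = urlencode(params)
```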
  1. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.05
    0.04992383 = product of:
      0.07488574 = sum of:
        0.054032028 = weight(_text_:f in 156) [ClassicSimilarity], result of:
          0.054032028 = score(doc=156,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.3082599 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.02085371 = product of:
          0.04170742 = sum of:
            0.04170742 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.04170742 = score(doc=156,freq=2.0), product of:
                0.15399806 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04397646 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005 Glasgow, UK, June 2005. Ed. by F. Crestani and I. Ruthven
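The number under each hit is a Lucene ClassicSimilarity "explain" tree: tf-idf leaf weights combined by sums, products, and coord factors, where a leaf weight is queryWeight (idf × queryNorm) times fieldWeight (tf × idf × fieldNorm), with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). Result 1's score can be reproduced from the constants the tree prints; a sketch (the formulas are Lucene's documented ClassicSimilarity, the constants are copied from above):

```python
import math

# Reproducing result 1's score from the explain tree above.
MAX_DOCS = 44218          # maxDocs printed in the tree
QUERY_NORM = 0.04397646   # queryNorm printed in the tree

def leaf_weight(doc_freq, freq, field_norm):
    # ClassicSimilarity leaf: queryWeight * fieldWeight
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))
    return (idf * QUERY_NORM) * (math.sqrt(freq) * idf * field_norm)

w_f  = leaf_weight(doc_freq=2232, freq=2.0, field_norm=0.0546875)  # _text_:f
w_22 = leaf_weight(doc_freq=3622, freq=2.0, field_norm=0.0546875)  # _text_:22

# coord(1/2) scales the "22" sub-query, coord(2/3) the overall sum:
total = (w_f + w_22 * (1 / 2)) * (2 / 3)  # matches the 0.04992383 above
```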
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.05
    0.046839546 = product of:
      0.07025932 = sum of:
        0.052384708 = product of:
          0.20953883 = sum of:
            0.20953883 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20953883 = score(doc=562,freq=2.0), product of:
                0.37283292 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04397646 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.017874608 = product of:
          0.035749216 = sum of:
            0.035749216 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.035749216 = score(doc=562,freq=2.0), product of:
                0.15399806 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04397646 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Hull, D.; Ait-Mokhtar, S.; Chuat, M.; Eisele, A.; Gaussier, E.; Grefenstette, G.; Isabelle, P.; Samuelsson, C.; Segond, F.: Language technologies and patent search and classification (2001) 0.03
    0.030875444 = product of:
      0.09262633 = sum of:
        0.09262633 = weight(_text_:f in 6318) [ClassicSimilarity], result of:
          0.09262633 = score(doc=6318,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.52844554 = fieldWeight in 6318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.09375 = fieldNorm(doc=6318)
      0.33333334 = coord(1/3)
    
  4. Rosemblat, G.; Tse, T.; Gemoets, D.: Adapting a monolingual consumer health system for Spanish cross-language information retrieval (2004) 0.03
    0.025729537 = product of:
      0.07718861 = sum of:
        0.07718861 = weight(_text_:f in 2673) [ClassicSimilarity], result of:
          0.07718861 = score(doc=2673,freq=8.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.4403713 = fieldWeight in 2673, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2673)
      0.33333334 = coord(1/3)
    
    Abstract
    This preliminary study applies a bilingual term list (BTL) approach to cross-language information retrieval (CLIR) in the consumer health domain and compares it to a machine translation (MT) approach. We compiled a Spanish-English BTL of 34,980 medical and general terms. We collected a training set of 466 general health queries from MedlinePlus en español and 488 domain-specific queries from ClinicalTrials.gov translated into Spanish. We submitted the training set queries in English against a test bed of 7,170 ClinicalTrials.gov English documents, and compared MT and BTL against this English monolingual standard. The BTL approach was less effective (F = 0.420) than the MT approach (F = 0.578). A failure analysis of the results led to substitution of BTL dictionary sources and the addition of rudimentary normalisation of plural forms. These changes improved the CLIR effectiveness of the same training set queries (F = 0.474), and yielded comparable results for a test set of 954 new queries (F = 0.484). These results will shape our efforts to support Spanish speakers' needs for consumer health information currently only available in English.
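The F values reported above are presumably F1, the harmonic mean of precision and recall. The abstract does not give the underlying precision/recall figures, so the inputs in the sketch below are purely illustrative:

```python
# F1 = harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

f1(0.52, 0.35)  # illustrative inputs, not figures from the study
```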
  5. Liu, S.; Liu, F.; Yu, C.; Meng, W.: ¬An effective approach to document retrieval via utilizing WordNet and recognizing phrases (2004) 0.03
    0.025729537 = product of:
      0.07718861 = sum of:
        0.07718861 = weight(_text_:f in 4078) [ClassicSimilarity], result of:
          0.07718861 = score(doc=4078,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.4403713 = fieldWeight in 4078, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.078125 = fieldNorm(doc=4078)
      0.33333334 = coord(1/3)
    
  6. Fattah, M. Abdel; Ren, F.: English-Arabic proper-noun transliteration-pairs creation (2008) 0.02
    0.018193532 = product of:
      0.05458059 = sum of:
        0.05458059 = weight(_text_:f in 1999) [ClassicSimilarity], result of:
          0.05458059 = score(doc=1999,freq=4.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.31138954 = fieldWeight in 1999, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1999)
      0.33333334 = coord(1/3)
    
    Abstract
    Proper nouns may be considered the most important query words in information retrieval. If the two languages use the same alphabet, the same proper nouns can be found in either language. However, if the two languages use different alphabets, the names must be transliterated. Short vowels are not usually marked on Arabic words in almost all Arabic documents (except very important documents like the Muslim and Christian holy books). Moreover, most Arabic words have a syllable consisting of a consonant-vowel combination (CV), which means that most Arabic words contain a short or long vowel between two successive consonant letters. That makes it difficult to create English-Arabic transliteration pairs, since some English letters may not be matched with any romanized Arabic letter. In the present study, we present different approaches for extraction of transliteration proper-noun pairs from parallel corpora based on different similarity measures between the English and romanized Arabic proper nouns under consideration. The strength of our new system is that it works well for low-frequency proper noun pairs. We evaluate the new approaches presented using two different English-Arabic parallel corpora. Most of our results outperform previously published results in terms of precision, recall, and F-Measure.
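The extraction approach described above scores candidate pairs by string similarity between English and romanized Arabic proper nouns. As one simple stand-in (not necessarily a measure used in the study), a normalized Levenshtein similarity:

```python
# Levenshtein edit distance via the standard two-row dynamic program.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    # 1.0 for identical strings, approaching 0.0 for unrelated ones.
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

similarity("muhammad", "mohamed")  # high for plausible transliteration pairs
```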
  7. Cruys, T. van de; Moirón, B.V.: Semantics-based multiword expression extraction (2007) 0.02
    0.01762826 = product of:
      0.052884776 = sum of:
        0.052884776 = product of:
          0.10576955 = sum of:
            0.10576955 = weight(_text_:van in 2919) [ClassicSimilarity], result of:
              0.10576955 = score(doc=2919,freq=2.0), product of:
                0.24523866 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.04397646 = queryNorm
                0.43129233 = fieldWeight in 2919, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2919)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  8. Schröter, F.; Meyer, U.: Entwicklung sprachlicher Handlungskompetenz in Englisch mit Hilfe eines Multimedia-Sprachlernsystems (2000) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 5567) [ClassicSimilarity], result of:
          0.046313167 = score(doc=5567,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 5567, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=5567)
      0.33333334 = coord(1/3)
    
  9. Martínez, F.; Martín, M.T.; Rivas, V.M.; Díaz, M.C.; Ureña, L.A.: Using neural networks for multiword recognition in IR (2003) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 2777) [ClassicSimilarity], result of:
          0.046313167 = score(doc=2777,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 2777, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=2777)
      0.33333334 = coord(1/3)
    
  10. Sebastiani, F.: Machine learning in automated text categorization (2002) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 3389) [ClassicSimilarity], result of:
          0.046313167 = score(doc=3389,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 3389, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=3389)
      0.33333334 = coord(1/3)
    
  11. Galvez, C.; Moya-Anegón, F. de; Solana, V.H.: Term conflation methods in information retrieval : non-linguistic and linguistic approaches (2005) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 4394) [ClassicSimilarity], result of:
          0.046313167 = score(doc=4394,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 4394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=4394)
      0.33333334 = coord(1/3)
    
  12. Galvez, C.; Moya-Anegón, F. de: ¬An evaluation of conflation accuracy using finite-state transducers (2006) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 5599) [ClassicSimilarity], result of:
          0.046313167 = score(doc=5599,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 5599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=5599)
      0.33333334 = coord(1/3)
    
  13. Ahmed, F.; Nürnberger, A.: Evaluation of n-gram conflation approaches for Arabic text retrieval (2009) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 2941) [ClassicSimilarity], result of:
          0.046313167 = score(doc=2941,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 2941, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=2941)
      0.33333334 = coord(1/3)
    
  14. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.02
    0.015437722 = product of:
      0.046313167 = sum of:
        0.046313167 = weight(_text_:f in 3296) [ClassicSimilarity], result of:
          0.046313167 = score(doc=3296,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.26422277 = fieldWeight in 3296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.046875 = fieldNorm(doc=3296)
      0.33333334 = coord(1/3)
    
  15. Li, W.; Wong, K.-F.; Yuan, C.: Toward automatic Chinese temporal information extraction (2001) 0.01
    0.0128647685 = product of:
      0.038594306 = sum of:
        0.038594306 = weight(_text_:f in 6029) [ClassicSimilarity], result of:
          0.038594306 = score(doc=6029,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.22018565 = fieldWeight in 6029, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6029)
      0.33333334 = coord(1/3)
    
  16. Ibekwe-SanJuan, F.; SanJuan, E.: From term variants to research topics (2002) 0.01
    0.0128647685 = product of:
      0.038594306 = sum of:
        0.038594306 = weight(_text_:f in 1853) [ClassicSimilarity], result of:
          0.038594306 = score(doc=1853,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.22018565 = fieldWeight in 1853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1853)
      0.33333334 = coord(1/3)
    
  17. Peng, F.; Huang, X.: Machine learning for Asian language text classification (2007) 0.01
    0.0128647685 = product of:
      0.038594306 = sum of:
        0.038594306 = weight(_text_:f in 831) [ClassicSimilarity], result of:
          0.038594306 = score(doc=831,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.22018565 = fieldWeight in 831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0390625 = fieldNorm(doc=831)
      0.33333334 = coord(1/3)
    
  18. Shaalan, K.; Raza, H.: NERA: Named Entity Recognition for Arabic (2009) 0.01
    0.0128647685 = product of:
      0.038594306 = sum of:
        0.038594306 = weight(_text_:f in 2953) [ClassicSimilarity], result of:
          0.038594306 = score(doc=2953,freq=2.0), product of:
            0.17528075 = queryWeight, product of:
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.04397646 = queryNorm
            0.22018565 = fieldWeight in 2953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.985786 = idf(docFreq=2232, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2953)
      0.33333334 = coord(1/3)
    
    Abstract
    Name identification has been worked on quite intensively for the past few years, and has been incorporated into several products revolving around natural language processing tasks. Many researchers have attacked the name identification problem in a variety of languages, but only a few limited research efforts have focused on named entity recognition for Arabic script. This is due to the lack of resources for Arabic named entities and the limited amount of progress made in Arabic natural language processing in general. In this article, we present the results of our attempt at the recognition and extraction of the 10 most important categories of named entities in Arabic script: the person name, location, company, date, time, price, measurement, phone number, ISBN, and file name. We developed the system Named Entity Recognition for Arabic (NERA) using a rule-based approach. The resources created are: a Whitelist representing a dictionary of names, and a grammar, in the form of regular expressions, which is responsible for recognizing the named entities. A filtration mechanism is used that serves two different purposes: (a) revision of the results from a named entity extractor by using metadata, in terms of a Blacklist or rejecter, about ill-formed named entities and (b) disambiguation of identical or overlapping textual matches returned by different named entity extractors to get the correct choice. In NERA, we addressed major challenges posed by NER in the Arabic language arising due to the complexity of the language, peculiarities in the Arabic orthographic system, nonstandardization of the written text, ambiguity, and lack of resources. NERA has been effectively evaluated using our own tagged corpus; it achieved satisfactory results in terms of precision, recall, and F-measure.
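The pipeline described above (whitelist dictionary, regular-expression grammar, and a blacklist rejecter) can be sketched in miniature. The patterns and names below are Latin-script toys chosen for illustration only; NERA's actual rules target Arabic script:

```python
import re

# Toy stand-ins for NERA's resources (illustrative, not from the system):
WHITELIST = {"Cairo", "Amman"}            # dictionary of known names
REJECTER = {"The"}                        # blacklist of ill-formed hits
PATTERN = re.compile(r"\b[A-Z][a-z]+\b")  # toy capitalized-word grammar rule

def extract_entities(text):
    # Candidates from the grammar, plus whitelist hits, minus the rejecter.
    hits = set(PATTERN.findall(text)) | {w for w in WHITELIST if w in text}
    return sorted(hits - REJECTER)

extract_entities("The flight from Cairo to Amman departs at 9.")
```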
  19. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.01
    0.0119164055 = product of:
      0.035749216 = sum of:
        0.035749216 = product of:
          0.07149843 = sum of:
            0.07149843 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.07149843 = score(doc=5429,freq=2.0), product of:
                0.15399806 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04397646 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    c't. 2000, H.22, S.230-231
  20. Blair, D.C.: Information retrieval and the philosophy of language (2002) 0.01
    0.010073291 = product of:
      0.030219873 = sum of:
        0.030219873 = product of:
          0.060439747 = sum of:
            0.060439747 = weight(_text_:van in 4283) [ClassicSimilarity], result of:
              0.060439747 = score(doc=4283,freq=2.0), product of:
                0.24523866 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.04397646 = queryNorm
                0.24645276 = fieldWeight in 4283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4283)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Information retrieval - the retrieval, primarily, of documents or textual material - is fundamentally a linguistic process. At the very least we must describe what we want and match that description with descriptions of the information that is available to us. Furthermore, when we describe what we want, we must mean something by that description. This is a deceptively simple act, but such linguistic events have been the grist for philosophical analysis since Aristotle. Although there are complexities involved in referring to authors, document types, or other categories of information retrieval context, here I wish to focus on one of the most problematic activities in information retrieval: the description of the intellectual content of information items. And even though I take information retrieval to involve the description and retrieval of written text, what I say here is applicable to any information item whose intellectual content can be described for retrieval: books, documents, images, audio clips, video clips, scientific specimens, engineering schematics, and so forth. For convenience, though, I will refer only to the description and retrieval of documents. The description of intellectual content can go wrong in many obvious ways. We may describe what we want incorrectly; we may describe it correctly but in such general terms that its description is useless for retrieval; or we may describe what we want correctly, but misinterpret the descriptions of available information, and thereby match our description of what we want incorrectly. From a linguistic point of view, we can be misunderstood in the process of retrieval in many ways. Because the philosophy of language deals specifically with how we are understood and mis-understood, it should have some use for understanding the process of description in information retrieval. First, however, let us examine more closely the kinds of misunderstandings that can occur in information retrieval.
    We use language in searching for information in two principal ways. We use it to describe what we want and to discriminate what we want from other information that is available to us but that we do not want. Description and discrimination together articulate the goals of the information search process; they also delineate the two principal ways in which language can fail us in this process. Van Rijsbergen (1979) was the first to make this distinction, calling them "representation" and "discrimination."