Search (23 results, page 1 of 2)

  • year_i:[2000 TO 2010}
  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10066795 = sum of:
      0.080155164 = product of:
        0.24046549 = sum of:
          0.24046549 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24046549 = score(doc=562,freq=2.0), product of:
              0.4278608 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05046712 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020512793 = product of:
        0.041025586 = sum of:
          0.041025586 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041025586 = score(doc=562,freq=2.0), product of:
              0.17672725 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046712 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
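    The nested breakdown above is Lucene's "explain" output for its ClassicSimilarity (TF-IDF) ranking: each weight(...) clause is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(k/n) downweights a query when only k of its n clauses match. A minimal Python sketch reproduces the 0.10066795 total for document 562 directly from the values shown in the tree:

    ```python
    import math

    def clause_score(freq, idf, query_norm, field_norm):
        """One weight(...) clause of a ClassicSimilarity explain tree."""
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
        query_weight = idf * query_norm       # queryWeight
        field_weight = tf * idf * field_norm  # fieldWeight in the document
        return query_weight * field_weight

    QUERY_NORM, FIELD_NORM = 0.05046712, 0.046875
    clause_3a = clause_score(2.0, 8.478011, QUERY_NORM, FIELD_NORM)   # _text_:3a
    clause_22 = clause_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # _text_:22
    total = clause_3a * (1 / 3) + clause_22 * (1 / 2)                 # coord factors
    print(total)  # ~0.10066795, the listed document score
    ```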
  2. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.07
    0.0727624 = product of:
      0.1455248 = sum of:
        0.1455248 = sum of:
          0.09717569 = weight(_text_:tagging in 2541) [ClassicSimilarity], result of:
            0.09717569 = score(doc=2541,freq=2.0), product of:
              0.2979515 = queryWeight, product of:
                5.9038734 = idf(docFreq=327, maxDocs=44218)
                0.05046712 = queryNorm
              0.326146 = fieldWeight in 2541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.9038734 = idf(docFreq=327, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
          0.048349116 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
            0.048349116 = score(doc=2541,freq=4.0), product of:
              0.17672725 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046712 = queryNorm
              0.27358043 = fieldWeight in 2541, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
      0.5 = coord(1/2)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
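    The abstract does not spell out ChemSpell's similarity algorithm, so the following is only a generic illustration of dictionary-based spelling suggestion: rank vocabulary terms by Levenshtein edit distance to the query. The vocabulary slice and distance threshold are invented for the example, not NLM's actual word list.

    ```python
    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    # Hypothetical vocabulary slice; not the actual AZdict list.
    VOCABULARY = ["toluene", "toxaphene", "benzene", "toxicology"]

    def suggest(query: str, max_dist: int = 2) -> list[str]:
        """Return vocabulary terms within max_dist edits, closest first."""
        scored = sorted((edit_distance(query.lower(), t), t) for t in VOCABULARY)
        return [t for d, t in scored if d <= max_dist]

    print(suggest("tolluene"))  # ['toluene']
    ```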
  3. Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y.: Feature-rich Part-of-Speech Tagging with a cyclic dependency network (2003) 0.05
    0.048099514 = product of:
      0.09619903 = sum of:
        0.09619903 = product of:
          0.19239806 = sum of:
            0.19239806 = weight(_text_:tagging in 1059) [ClassicSimilarity], result of:
              0.19239806 = score(doc=1059,freq=4.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.64573616 = fieldWeight in 1059, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1059)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
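    The paper's central idea (i), conditioning on both the preceding and the following tag, can be illustrated with a toy locally normalized log-linear scorer. The feature weights below are invented for the illustration; the real tagger learns them from the Penn Treebank.

    ```python
    import math

    # Invented weights for (feature, tag) pairs; illustration only.
    WEIGHTS = {
        ("prev_tag=DT", "NN"): 1.2,   # determiners tend to precede nouns
        ("next_tag=NN", "JJ"): 0.9,   # adjectives tend to precede nouns
        ("word=book", "NN"): 0.8,
        ("word=book", "VB"): 0.5,
        ("suffix=ing", "VBG"): 1.5,   # word-shape feature for unknown words
    }

    def tag_distribution(word, prev_tag, next_tag,
                         tagset=("NN", "VB", "JJ", "VBG")):
        """Score each tag with features from BOTH directions; a left-to-right
        HMM-style tagger would see prev_tag only."""
        feats = [f"prev_tag={prev_tag}", f"next_tag={next_tag}",
                 f"word={word}", f"suffix={word[-3:]}"]
        scores = {t: sum(WEIGHTS.get((f, t), 0.0) for f in feats) for t in tagset}
        z = sum(math.exp(s) for s in scores.values())  # local normalization
        return {t: math.exp(s) / z for t, s in scores.items()}

    print(tag_distribution("book", prev_tag="DT", next_tag="NN"))  # NN wins
    ```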
  4. Granitzer, M.: Statistische Verfahren der Textanalyse (2006) 0.03
    0.03401149 = product of:
      0.06802298 = sum of:
        0.06802298 = product of:
          0.13604596 = sum of:
            0.13604596 = weight(_text_:tagging in 5809) [ClassicSimilarity], result of:
              0.13604596 = score(doc=5809,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.4566044 = fieldWeight in 5809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5809)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article gives an overview of statistical methods of text analysis in the context of the Semantic Web. It opens with a discussion of methods and common techniques for preprocessing text, such as stemming or part-of-speech tagging. The representations so introduced serve as the basis for statistical feature analyses and for further techniques such as information extraction and machine learning. These specialized techniques are presented in overview, with the aspects most important for the Semantic Web treated in detail. The article closes with the application of the presented techniques to building and maintaining ontologies, and with pointers to further reading.
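    As a concrete illustration of the preprocessing steps the article surveys, here is a minimal pipeline with NLTK; the library choice and the model downloads it requires are assumptions of this sketch, not the article's.

    ```python
    import nltk
    from nltk.stem import PorterStemmer

    nltk.download("punkt", quiet=True)                       # tokenizer model
    nltk.download("averaged_perceptron_tagger", quiet=True)  # POS model

    def preprocess(text: str):
        tokens = nltk.word_tokenize(text)   # tokenization
        tagged = nltk.pos_tag(tokens)       # part-of-speech tagging
        stemmer = PorterStemmer()
        return [(stemmer.stem(tok), tag) for tok, tag in tagged]  # stemming

    print(preprocess("Statistical methods support building and maintaining ontologies"))
    ```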
  5. Toutanova, K.; Manning, C.D.: Enriching the knowledge sources used in a maximum entropy Part-of-Speech Tagger (2000) 0.03
    0.03401149 = product of:
      0.06802298 = sum of:
        0.06802298 = product of:
          0.13604596 = sum of:
            0.13604596 = weight(_text_:tagging in 1060) [ClassicSimilarity], result of:
              0.13604596 = score(doc=1060,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.4566044 = fieldWeight in 1060, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1060)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents results for a maximum-entropy-based part-of-speech tagger, which achieves superior performance principally by enriching the information sources used for tagging. In particular, we get improved results by incorporating these features: (i) more extensive treatment of capitalization for unknown words; (ii) features for the disambiguation of the tense forms of verbs; (iii) features for disambiguating particles from prepositions and adverbs. The best resulting accuracy for the tagger on the Penn Treebank is 96.86% overall, and 86.91% on previously unseen words.
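    A sketch of the kind of feature enrichment the abstract lists: capitalization treatment for unknown words, suffix cues for verb tense, and local context for particle/preposition disambiguation. The feature names are illustrative, not the paper's actual feature templates.

    ```python
    def extract_features(words: list[str], i: int) -> dict:
        """Features for tagging words[i]; <S>/</S> mark sentence boundaries."""
        w = words[i]
        return {
            "word": w.lower(),
            "is_capitalized": w[:1].isupper(),  # unknown-word capitalization cue
            "has_digit": any(c.isdigit() for c in w),
            "suffix_ed": w.endswith("ed"),      # past-tense cue
            "suffix_ing": w.endswith("ing"),    # gerund/participle cue
            "prev_word": words[i - 1].lower() if i > 0 else "<S>",
            "next_word": words[i + 1].lower() if i + 1 < len(words) else "</S>",
        }

    # Context around 'up' helps separate particles from prepositions:
    print(extract_features("She looked up the number".split(), 2))
    ```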
  6. L'Homme, D.; L'Homme, M.-C.; Lemay, C.: Benchmarking the performance of two Part-of-Speech (POS) taggers for terminological purposes (2002) 0.03
    0.029152704 = product of:
      0.05830541 = sum of:
        0.05830541 = product of:
          0.11661082 = sum of:
            0.11661082 = weight(_text_:tagging in 1855) [ClassicSimilarity], result of:
              0.11661082 = score(doc=1855,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.39137518 = fieldWeight in 1855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1855)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Part-of-Speech (POS) taggers are used in an increasing number of terminology applications. However, terminologists do not know exactly how they perform on specialized texts, since most POS taggers have been trained on "general" corpora, that is, corpora containing all sorts of undifferentiated texts. In this article, we evaluate the performance of two POS taggers on French and English medical texts. The taggers are TnT (a statistical tagger developed at Saarland University (Brants 2000)) and WinBrill (the Windows version of the tagger initially developed by Eric Brill (1992)). Ten extracts from medical texts were submitted to the taggers and the outputs checked manually. Results pertain to the accuracy of tagging in terms of correctly and incorrectly tagged words. We also study the handling of unknown words from different viewpoints.
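    The study's headline measure, accuracy in terms of correctly and incorrectly tagged words, reduces to a token-by-token comparison against the manually checked output. A minimal sketch follows; the example tokens are invented, not from the study's corpora.

    ```python
    def tagging_accuracy(predicted, gold):
        """Fraction of (token, tag) pairs the tagger got right."""
        assert len(predicted) == len(gold)
        return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

    gold = [("le", "DET"), ("patient", "NOUN"), ("tousse", "VERB")]
    pred = [("le", "DET"), ("patient", "ADJ"), ("tousse", "VERB")]
    print(f"{tagging_accuracy(pred, gold):.2%}")  # 66.67%
    ```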
  7. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.020512793 = product of:
      0.041025586 = sum of:
        0.041025586 = product of:
          0.08205117 = sum of:
            0.08205117 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08205117 = score(doc=4888,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  8. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.020512793 = product of:
      0.041025586 = sum of:
        0.041025586 = product of:
          0.08205117 = sum of:
            0.08205117 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08205117 = score(doc=5429,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  9. Kiss, T.: Anmerkungen zur scheinbaren Konkurrenz von numerischen und symbolischen Verfahren in der Computerlinguistik (2002) 0.02
    0.019435138 = product of:
      0.038870275 = sum of:
        0.038870275 = product of:
          0.07774055 = sum of:
            0.07774055 = weight(_text_:tagging in 1752) [ClassicSimilarity], result of:
              0.07774055 = score(doc=1752,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.2609168 = fieldWeight in 1752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In conclusion, one should probably not speak of a superiority of statistical methods, at least in the area of tagging. Moreover, the opposition between rule-based and numerical methods needs to be softened here, since statistical methods use rule systems too. Even when learning without a reference corpus, at least an assignment of words to a lexicon, or a heuristic, rule-based recognition of unknown words, is necessary. Statistical methods certainly have their justification (and this was probably not stressed enough here): they are useful; compared with introspection in particular, they allow a more direct and broader approach to the phenomenon of language. The extensive electronic corpora now available practically demand that language also be studied with statistical means. However, statistical methods cannot replace rule-based methods. The dictum that "there is no other way" must therefore be firmly contradicted. That statistical methods are currently so much en vogue, and make rule-based methods look like an old episode of Dallas, may also be because too many representatives of the old paradigm cannot muster the energy to open themselves to the new paradigm far enough for a critical engagement with the new on the basis of the old to become possible. Mathematics is a respected science because it is difficult; statistical language processing is a feared discipline because its properties are often not examined thoroughly enough.
  10. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.017093996 = product of:
      0.03418799 = sum of:
        0.03418799 = product of:
          0.06837598 = sum of:
            0.06837598 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06837598 = score(doc=5428,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  11. Jones, I.; Cunliffe, D.; Tudhope, D.: Natural language processing and knowledge organization systems as an aid to retrieval (2004) 0.02
    0.017005745 = product of:
      0.03401149 = sum of:
        0.03401149 = product of:
          0.06802298 = sum of:
            0.06802298 = weight(_text_:tagging in 2677) [ClassicSimilarity], result of:
              0.06802298 = score(doc=2677,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.2283022 = fieldWeight in 2677, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2677)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    1. Introduction The need for research into the application of linguistic techniques in Information Retrieval (IR) in general, and a similar need in faceted Knowledge Organization Systems (KOS), has been indicated by various authors. Smeaton (1997) points out the inherent limitations of conventional approaches to IR based on "bags of words", mainly difficulties caused by lexical ambiguity in the words concerned, and goes on to suggest the possibility of using Natural Language Processing (NLP) in query formulation. Past experience with a faceted retrieval system highlighted the need for integrating the linguistic perspective in order to fully utilise the potential of a KOS (Tudhope et al., 2002). The present research seeks to address some of these needs in using NLP to improve the efficacy of KOS tools in query and retrieval systems. Syntactic parsing and part-of-speech tagging can substantially reduce lexical ambiguity through homograph disambiguation. Given the two strings "I table the motion" and "I put the motion on the table", for instance, the parser used in this research clearly indicates that 'table' in the first string is a verb, while 'table' in the second string is a noun, a distinction that would be missed in the "bag of words" approach. This syntactic disambiguation enables a more precise matching from free text to the controlled vocabulary of a KOS and vice versa. The use of a general linguistic resource, namely Roget's Thesaurus of English Words and Phrases (RTEWP), as an intermediary in this process, is investigated. The adaptation of the Link parser (Sleator & Temperley, 1993) to the purposes of the research is reported. The design and implementation of the early practical stages of the project are described, and the results of the initial experiments are presented and evaluated. Applications of the techniques developed are foreseen in the areas of query disambiguation, information retrieval and automatic indexing. In the first section of the paper a brief review of the literature and relevant current work in the field is presented. The second section includes reports on the development of algorithms, the construction of data sets, and theoretical and experimental work undertaken to date. The third section evaluates the results obtained and outlines directions for future research.
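    The 'table' example above can be reproduced with any off-the-shelf POS tagger. The sketch below uses NLTK's default tagger, which is an assumption of this illustration, not the Link parser the authors adapted; whether a given statistical tagger resolves the rarer verb reading correctly depends on its training data.

    ```python
    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    for sentence in ["I table the motion", "I put the motion on the table"]:
        print(nltk.pos_tag(nltk.word_tokenize(sentence)))
    # Expected: 'table' as a verb (VB*) in the first sentence and a noun (NN)
    # in the second -- the distinction a bag-of-words model cannot make.
    ```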
  12. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.01
    0.011965796 = product of:
      0.023931593 = sum of:
        0.023931593 = product of:
          0.047863185 = sum of:
            0.047863185 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.047863185 = score(doc=5483,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10.12.2000 18:22:35
  13. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.011965796 = product of:
      0.023931593 = sum of:
        0.023931593 = product of:
          0.047863185 = sum of:
            0.047863185 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.047863185 = score(doc=156,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
  14. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.011965796 = product of:
      0.023931593 = sum of:
        0.023931593 = product of:
          0.047863185 = sum of:
            0.047863185 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.047863185 = score(doc=3840,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:22:33
  15. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    0.011965796 = product of:
      0.023931593 = sum of:
        0.023931593 = product of:
          0.047863185 = sum of:
            0.047863185 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.047863185 = score(doc=4184,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2011 10:38:28
  16. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.010256397 = product of:
      0.020512793 = sum of:
        0.020512793 = product of:
          0.041025586 = sum of:
            0.041025586 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.041025586 = score(doc=4436,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 2.2000 14:22:39
  17. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.01
    0.010256397 = product of:
      0.020512793 = sum of:
        0.020512793 = product of:
          0.041025586 = sum of:
            0.041025586 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.041025586 = score(doc=1746,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:17:30
  18. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.01
    0.008546998 = product of:
      0.017093996 = sum of:
        0.017093996 = product of:
          0.03418799 = sum of:
            0.03418799 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
              0.03418799 = score(doc=5557,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.19345059 = fieldWeight in 5557, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5557)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2000 13:22:17
  19. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.01
    0.008546998 = product of:
      0.017093996 = sum of:
        0.017093996 = product of:
          0.03418799 = sum of:
            0.03418799 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
              0.03418799 = score(doc=734,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.19345059 = fieldWeight in 734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=734)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19. 7.2002 14:22:31
  20. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.01
    0.008546998 = product of:
      0.017093996 = sum of:
        0.017093996 = product of:
          0.03418799 = sum of:
            0.03418799 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
              0.03418799 = score(doc=4900,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.19345059 = fieldWeight in 4900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4900)
          0.5 = coord(1/2)
      0.5 = coord(1/2)