Search (64 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.11
    0.105090946 = product of:
      0.21018189 = sum of:
        0.21018189 = sum of:
          0.1554811 = weight(_text_:tagging in 1490) [ClassicSimilarity], result of:
            0.1554811 = score(doc=1490,freq=2.0), product of:
              0.2979515 = queryWeight, product of:
                5.9038734 = idf(docFreq=327, maxDocs=44218)
                0.05046712 = queryNorm
              0.5218336 = fieldWeight in 1490, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.9038734 = idf(docFreq=327, maxDocs=44218)
                0.0625 = fieldNorm(doc=1490)
          0.054700784 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
            0.054700784 = score(doc=1490,freq=2.0), product of:
              0.17672725 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046712 = queryNorm
              0.30952093 = fieldWeight in 1490, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1490)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:30:24
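
  The score breakdowns shown for each hit are Lucene ClassicSimilarity "explain" trees. As a check on the arithmetic, the following minimal Python sketch reproduces result 1's score from the quantities printed above (tf, idf, queryNorm, fieldNorm, coord); the helper names are illustrative, not Lucene's API.

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        # tf = sqrt(freq); queryWeight = idf * queryNorm
        # fieldWeight = tf * idf * fieldNorm; weight = queryWeight * fieldWeight
        tf = math.sqrt(freq)
        i = idf(doc_freq, max_docs)
        return (i * query_norm) * (tf * i * field_norm)

    QUERY_NORM = 0.05046712  # queryNorm as printed in the explain trees

    w_tagging = term_weight(2.0, 327, 44218, QUERY_NORM, 0.0625)  # ~0.1554811
    w_22 = term_weight(2.0, 3622, 44218, QUERY_NORM, 0.0625)      # ~0.0547008
    score = (w_tagging + w_22) * 0.5  # the coord(1/2) factor from the tree
    print(score)  # ~0.1050909, matching result 1's 0.105090946
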
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10066795 = sum of:
      0.080155164 = product of:
        0.24046549 = sum of:
          0.24046549 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24046549 = score(doc=562,freq=2.0), product of:
              0.4278608 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05046712 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020512793 = product of:
        0.041025586 = sum of:
          0.041025586 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041025586 = score(doc=562,freq=2.0), product of:
              0.17672725 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046712 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  3. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.07
    0.0727624 = product of:
      0.1455248 = sum of:
        0.1455248 = sum of:
          0.09717569 = weight(_text_:tagging in 2541) [ClassicSimilarity], result of:
            0.09717569 = score(doc=2541,freq=2.0), product of:
              0.2979515 = queryWeight, product of:
                5.9038734 = idf(docFreq=327, maxDocs=44218)
                0.05046712 = queryNorm
              0.326146 = fieldWeight in 2541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.9038734 = idf(docFreq=327, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
          0.048349116 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
            0.048349116 = score(doc=2541,freq=4.0), product of:
              0.17672725 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05046712 = queryNorm
              0.27358043 = fieldWeight in 2541, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
      0.5 = coord(1/2)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  4. Chowdhury, A.; McCabe, M.C.: Improving information retrieval systems using part of speech tagging (1993) 0.05
    0.05049397 = product of:
      0.10098794 = sum of:
        0.10098794 = product of:
          0.20197588 = sum of:
            0.20197588 = weight(_text_:tagging in 1061) [ClassicSimilarity], result of:
              0.20197588 = score(doc=1061,freq=6.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.6778818 = fieldWeight in 1061, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1061)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The object of Information Retrieval is to retrieve all relevant documents for a user query and only those relevant documents. Much research has focused on achieving this objective with little regard for storage overhead or performance. In this paper we evaluate the use of part-of-speech tagging to improve the index storage overhead and general speed of the system, with only a minimal reduction in precision/recall measurements. We tagged 500 MB of the Los Angeles Times 1990 and 1989 document collection provided by TREC for parts of speech. We then experimented to find the most relevant part of speech to index. We show that 90% of precision/recall is achieved with 40% of the document collection's terms. We also show that this is an improvement in overhead with only a 1% reduction in precision/recall.
    Object
    POS-Tagging
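
  Result 4 matches only "tagging", but with freq=6: tf grows only as the square root of the term frequency (sqrt(6) ~ 2.4495), so repeated terms are damped, and the two nested coord(1/2) factors quarter the weight. Continuing the sketch given after result 1 (same illustrative helpers):

    w = term_weight(6.0, 327, 44218, QUERY_NORM, 0.046875)  # ~0.2019759
    print(w * 0.5 * 0.5)  # two coord(1/2) factors -> ~0.0504940, as in result 4
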
  5. Manning, C.D.: Part-of-Speech Tagging from 97% to 100% : is it time for some linguistics? (2011) 0.05
    0.048587844 = product of:
      0.09717569 = sum of:
        0.09717569 = product of:
          0.19435138 = sum of:
            0.19435138 = weight(_text_:tagging in 1121) [ClassicSimilarity], result of:
              0.19435138 = score(doc=1121,freq=8.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.652292 = fieldWeight in 1121, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1121)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I examine what would be necessary to move part-of-speech tagging performance from its current level of about 97.3% token accuracy (56% sentence accuracy) to close to 100% accuracy. I suggest that it must still be possible to greatly increase tagging performance and examine some useful improvements that have recently been made to the Stanford Part-of-Speech Tagger. However, an error analysis of some of the remaining errors suggests that there is limited further mileage to be had either from better machine learning or better features in a discriminative sequence classifier. The prospects for further gains from semisupervised learning also seem quite limited. Rather, I suggest and begin to demonstrate that the largest opportunity for further progress comes from improving the taxonomic basis of the linguistic resources from which taggers are trained. That is, from improved descriptive linguistics. However, I conclude by suggesting that there are also limits to this process. The status of some words may not be adequately captured by assigning them to one of a small number of categories. While conventions can be used in such cases to improve tagging consistency, they lack a strong linguistic basis.
  6. Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y.: Feature-rich Part-of-Speech Tagging with a cyclic dependency network (2003) 0.05
    0.048099514 = product of:
      0.09619903 = sum of:
        0.09619903 = product of:
          0.19239806 = sum of:
            0.19239806 = weight(_text_:tagging in 1059) [ClassicSimilarity], result of:
              0.19239806 = score(doc=1059,freq=4.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.64573616 = fieldWeight in 1059, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1059)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.
  7. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.04
    0.040077582 = product of:
      0.080155164 = sum of:
        0.080155164 = product of:
          0.24046549 = sum of:
            0.24046549 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24046549 = score(doc=862,freq=2.0), product of:
                0.4278608 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05046712 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  8. Schmid, H.: Improvements in Part-of-Speech tagging with an application to German (1995) 0.04
    0.038870275 = product of:
      0.07774055 = sum of:
        0.07774055 = product of:
          0.1554811 = sum of:
            0.1554811 = weight(_text_:tagging in 124) [ClassicSimilarity], result of:
              0.1554811 = score(doc=124,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.5218336 = fieldWeight in 124, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0625 = fieldNorm(doc=124)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Granitzer, M.: Statistische Verfahren der Textanalyse (2006) 0.03
    0.03401149 = product of:
      0.06802298 = sum of:
        0.06802298 = product of:
          0.13604596 = sum of:
            0.13604596 = weight(_text_:tagging in 5809) [ClassicSimilarity], result of:
              0.13604596 = score(doc=5809,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.4566044 = fieldWeight in 5809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5809)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article provides an overview of statistical methods of text analysis in the context of the Semantic Web. By way of introduction, it discusses methods and common techniques for preprocessing texts, such as stemming and part-of-speech tagging. The representations introduced in this way serve as the basis for statistical feature analyses as well as for more advanced techniques such as information extraction and machine learning. These specialized techniques are presented in overview, with the most important aspects relating to the Semantic Web treated in detail. The article concludes with the application of the presented techniques to the construction and maintenance of ontologies, and with pointers to further literature.
  10. Toutanova, K.; Manning, C.D.: Enriching the knowledge sources used in a maximum entropy Part-of-Speech Tagger (2000) 0.03
    0.03401149 = product of:
      0.06802298 = sum of:
        0.06802298 = product of:
          0.13604596 = sum of:
            0.13604596 = weight(_text_:tagging in 1060) [ClassicSimilarity], result of:
              0.13604596 = score(doc=1060,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.4566044 = fieldWeight in 1060, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1060)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents results for a maximum-entropy-based part of speech tagger, which achieves superior performance principally by enriching the information sources used for tagging. In particular, we get improved results by incorporating these features: (i) more extensive treatment of capitalization for unknown words; (ii) features for the disambiguation of the tense forms of verbs; (iii) features for disambiguating particles from prepositions and adverbs. The best resulting accuracy for the tagger on the Penn Treebank is 96.86% overall, and 86.91% on previously unseen words.
  11. L'Homme, D.; L'Homme, M.-C.; Lemay, C.: Benchmarking the performance of two Part-of-Speech (POS) taggers for terminological purposes (2002) 0.03
    0.029152704 = product of:
      0.05830541 = sum of:
        0.05830541 = product of:
          0.11661082 = sum of:
            0.11661082 = weight(_text_:tagging in 1855) [ClassicSimilarity], result of:
              0.11661082 = score(doc=1855,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.39137518 = fieldWeight in 1855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1855)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Part-of-Speech (POS) taggers are used in an increasing number of terminology applications. However, terminologists do not know exactly how they perform on specialized texts, since most POS taggers have been trained on "general" corpora, that is, corpora containing all sorts of undifferentiated texts. In this article, we evaluate the performance of two POS taggers on French and English medical texts. The taggers are TnT (a statistical tagger developed at Saarland University (Brants 2000)) and WinBrill (the Windows version of the tagger initially developed by Eric Brill (1992)). Ten extracts from medical texts were submitted to the taggers and the outputs scanned manually. Results pertain to the accuracy of tagging in terms of correctly and incorrectly tagged words. We also study the handling of unknown words from different viewpoints.
  12. Schöneberg, U.; Sperber, W.: POS tagging and its applications for mathematics (2014) 0.03
    0.029152704 = product of:
      0.05830541 = sum of:
        0.05830541 = product of:
          0.11661082 = sum of:
            0.11661082 = weight(_text_:tagging in 1748) [ClassicSimilarity], result of:
              0.11661082 = score(doc=1748,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.39137518 = fieldWeight in 1748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Warner, A.J.: Natural language processing (1987) 0.03
    0.027350392 = product of:
      0.054700784 = sum of:
        0.054700784 = product of:
          0.10940157 = sum of:
            0.10940157 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.10940157 = score(doc=337,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  14. Bredack, J.: Automatische Extraktion fachterminologischer Mehrwortbegriffe : ein Verfahrensvergleich (2016) 0.02
    0.024293922 = product of:
      0.048587844 = sum of:
        0.048587844 = product of:
          0.09717569 = sum of:
            0.09717569 = weight(_text_:tagging in 3194) [ClassicSimilarity], result of:
              0.09717569 = score(doc=3194,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.326146 = fieldWeight in 3194, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3194)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The TreeTagger and the indexing software Lingo were used as extraction systems. The TreeTagger is based on a statistical tagging and chunking algorithm that automatically identifies and extracts NPs. It can be used in a variety of natural language processing scenarios, primarily as a POS tagger for different languages. The indexing system Lingo, in contrast to the TreeTagger, works with electronic dictionaries and pattern-based matching. Lingo is a system geared toward automatic indexing that ships with a variety of modules, which can be individually adapted to a given task and tuned to one another. The different processing approaches showed clearly in the two systems' result sets. The low degree of overlap between the result sets illustrates their divergent modes of operation, which could be characterized by way of example in a qualitative analysis. The present study cannot conclusively determine which of the two systems should be preferred for generating index terms.
  15. Chen, L.; Fang, H.: An automatic method for extracting innovative ideas based on the Scopus® database (2019) 0.02
    0.024293922 = product of:
      0.048587844 = sum of:
        0.048587844 = product of:
          0.09717569 = sum of:
            0.09717569 = weight(_text_:tagging in 5310) [ClassicSimilarity], result of:
              0.09717569 = score(doc=5310,freq=2.0), product of:
                0.2979515 = queryWeight, product of:
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.05046712 = queryNorm
                0.326146 = fieldWeight in 5310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.9038734 = idf(docFreq=327, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The novelty of knowledge claims in a research paper can be considered an evaluation criterion for papers to supplement citations. To provide a foundation for research evaluation from the perspective of innovativeness, we propose an automatic approach for extracting innovative ideas from the abstracts of technology and engineering papers. The approach extracts N-grams as candidates based on part-of-speech tagging and determines whether they are novel by checking the Scopus® database for prior occurrences. Moreover, we discuss the distributions of innovative ideas in different abstract structures. To improve the performance by excluding noisy N-grams, a list of stopwords and a list of research description characteristics were developed. We selected abstracts of articles published from 2011 to 2017 with the topic of semantic analysis as the experimental texts. Excluding noisy N-grams, considering the distribution of innovative ideas in abstracts, and suitably combining N-grams can effectively improve the performance of automatic innovative idea extraction. Unlike co-word and co-citation analysis, innovative-idea extraction aims to identify the differences in a paper from all previously published papers.
  16. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.023931593 = product of:
      0.047863185 = sum of:
        0.047863185 = product of:
          0.09572637 = sum of:
            0.09572637 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09572637 = score(doc=3164,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  17. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.023931593 = product of:
      0.047863185 = sum of:
        0.047863185 = product of:
          0.09572637 = sum of:
            0.09572637 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09572637 = score(doc=4506,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  18. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.023931593 = product of:
      0.047863185 = sum of:
        0.047863185 = product of:
          0.09572637 = sum of:
            0.09572637 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09572637 = score(doc=6672,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  19. New tools for human translators (1997) 0.02
    0.023931593 = product of:
      0.047863185 = sum of:
        0.047863185 = product of:
          0.09572637 = sum of:
            0.09572637 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09572637 = score(doc=1179,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  20. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.023931593 = product of:
      0.047863185 = sum of:
        0.047863185 = product of:
          0.09572637 = sum of:
            0.09572637 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09572637 = score(doc=3117,freq=2.0), product of:
                0.17672725 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05046712 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22

Languages

  • e 45
  • d 19

Types

  • a 51
  • el 7
  • m 5
  • s 3
  • x 3
  • p 2
  • d 1