Search (57 results, page 1 of 3)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.11783104 = sum of:
      0.093820974 = product of:
        0.2814629 = sum of:
          0.2814629 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2814629 = score(doc=562,freq=2.0), product of:
              0.5008076 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.059071355 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.024010058 = product of:
        0.048020117 = sum of:
          0.048020117 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.048020117 = score(doc=562,freq=2.0), product of:
              0.20685782 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059071355 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
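     The score breakdown above is Lucene's ClassicSimilarity "explain" output: each leaf term weight is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and each coord() factor discounts a clause for unmatched query terms. As a minimal sketch (plain Python, constants copied from the tree above), the total for this entry can be reproduced up to float rounding:

         import math

         # Values copied from the explain tree for doc 562, term "3a":
         freq, idf, query_norm, field_norm = 2.0, 8.478011, 0.059071355, 0.046875

         tf = math.sqrt(freq)                      # 1.4142135
         query_weight = idf * query_norm           # 0.5008076
         field_weight = tf * idf * field_norm      # 0.56201804
         term_score = query_weight * field_weight  # 0.2814629 = weight(_text_:3a)

         clause_3a = term_score * (1 / 3)          # coord(1/3) -> 0.093820974
         clause_22 = 0.048020117 * (1 / 2)         # coord(1/2) -> 0.024010058 (term "22")
         print(clause_3a + clause_22)              # ~0.11783104, the total score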
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.05
    0.046910487 = product of:
      0.093820974 = sum of:
        0.093820974 = product of:
          0.2814629 = sum of:
            0.2814629 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.2814629 = score(doc=862,freq=2.0), product of:
                0.5008076 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.059071355 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
     https://arxiv.org/abs/2212.06721
  3. dpa: 14 Forscher mit viel Geld angelockt : Wolfgang-Paul-Preis (2001) 0.04
    0.036587905 = product of:
      0.07317581 = sum of:
        0.07317581 = product of:
          0.14635162 = sum of:
            0.14635162 = weight(_text_:500 in 6814) [ClassicSimilarity], result of:
              0.14635162 = score(doc=6814,freq=2.0), product of:
                0.36112627 = queryWeight, product of:
                  6.113391 = idf(docFreq=265, maxDocs=44218)
                  0.059071355 = queryNorm
                0.40526438 = fieldWeight in 6814, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.113391 = idf(docFreq=265, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6814)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Darin. "Die Sprachwissenschaftlerin Christiane Fellbaum (dpa-Bild) wird ihr Preisgeld für das an der Berlin-Brandenburgischen Akademie der Wissenschaften zu erstellende "Digitale Wörterbuch der Deutschen Sprache des 20. Jahrhunderts" einsetzen. Sie setzt mit ihrem Computer dort an, wo konventionelle Wörterbücher nicht mehr mithalten können. Sie stellt per Knopfdruck Wortverbindungen her, die eine Sprache so reich an Bildern und Vorstellungen - und damit einzigartig - machen. Ihr elektronisches Lexikon aus über 500 Millionen Wörtern soll später als Datenbank zugänglich sein. Seine Grundlage ist die deutsche Sprache der vergangenen hundert Jahre - ein repräsentativer Querschnitt, zusammengestellt aus Literatur, Zeitungsdeutsch, Fachbuchsprache, Werbetexten und niedergeschriebener Umgangssprache. Wo ein Wörterbuch heute nur ein Wort mit Synonymen oder wenigen Verwendungsmöglichkeiten präsentiert, spannt die Forscherin ein riesiges Netz von Wortverbindungen. Bei Christiane Fellbaums Systematik heißt es beispielsweise nicht nur "verlieren", sondern auch noch "den Faden" oder "die Geduld" verlieren - samt allen möglichen weiteren Kombinationen, die der Computer wie eine Suchmaschine in seinen gespeicherten Texten findet."
  4. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.04
    0.035516188 = sum of:
      0.021510322 = product of:
        0.06453096 = sum of:
          0.06453096 = weight(_text_:objects in 1616) [ClassicSimilarity], result of:
            0.06453096 = score(doc=1616,freq=2.0), product of:
              0.31396845 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059071355 = queryNorm
              0.20553327 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
        0.33333334 = coord(1/3)
      0.014005868 = product of:
        0.028011736 = sum of:
          0.028011736 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
            0.028011736 = score(doc=1616,freq=2.0), product of:
              0.20685782 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059071355 = queryNorm
              0.1354154 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
        0.5 = coord(1/2)
    
    Abstract
     The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second-largest at-home Internet population in the world in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats, and disciplines has been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research in crossing language boundaries, especially between European and Oriental languages, is still in its initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus generated by co-occurrence analysis and a Hopfield network can be used to suggest additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus with a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term; the direct translation of the input term can also be retrieved in most cases.
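     The abstract pairs co-occurrence analysis with Hopfield-style spreading activation. A minimal sketch of the co-occurrence step, assuming sentence-aligned English/Chinese pairs (the weighting and all names below are illustrative, not the authors' exact formulas):

         from collections import defaultdict

         def cooccurrence_weights(aligned_pairs):
             # aligned_pairs: list of (english_terms, chinese_terms) per aligned sentence.
             # Returns w[a][b] = co(a, b) / occ(a), an asymmetric association weight.
             occ = defaultdict(int)
             co = defaultdict(lambda: defaultdict(int))
             for en_terms, zh_terms in aligned_pairs:
                 for a in set(en_terms):
                     occ[a] += 1
                     for b in set(zh_terms):
                         co[a][b] += 1
             return {a: {b: c / occ[a] for b, c in bs.items()} for a, bs in co.items()}

         pairs = [(["thesaurus", "law"], ["词库", "法律"]),
                  (["law", "court"], ["法律", "法院"])]
         w = cooccurrence_weights(pairs)
         print(w["law"])  # {'词库': 0.5, '法律': 1.0, '法院': 0.5}

     In the full method, weights like these would seed an iterative spreading-activation (Hopfield) update that surfaces additional related terms beyond direct translations.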
  5. Warner, A.J.: Natural language processing (1987) 0.03
    0.032013413 = product of:
      0.064026825 = sum of:
        0.064026825 = product of:
          0.12805365 = sum of:
            0.12805365 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.12805365 = score(doc=337,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  6. Sokirko, A.V.: Programnaya realizatsiya Russkogo abshchesemanticheskogo slovarya (1997) 0.03
    0.030729031 = product of:
      0.061458062 = sum of:
        0.061458062 = product of:
          0.18437418 = sum of:
            0.18437418 = weight(_text_:objects in 2258) [ClassicSimilarity], result of:
              0.18437418 = score(doc=2258,freq=2.0), product of:
                0.31396845 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.059071355 = queryNorm
                0.58723795 = fieldWeight in 2258, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2258)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
     Discusses the Dolphi2 for Windows software, which has been used for the development of the Russian Semantic Dictionary ROSS. Although not a relational database as such, Dolphi actively uses standard objects of relational databases.
  7. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.03
    0.028011736 = product of:
      0.05602347 = sum of:
        0.05602347 = product of:
          0.11204694 = sum of:
            0.11204694 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.11204694 = score(doc=3164,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  8. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    0.028011736 = product of:
      0.05602347 = sum of:
        0.05602347 = product of:
          0.11204694 = sum of:
            0.11204694 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.11204694 = score(doc=4506,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  9. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    0.028011736 = product of:
      0.05602347 = sum of:
        0.05602347 = product of:
          0.11204694 = sum of:
            0.11204694 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.11204694 = score(doc=6672,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  10. New tools for human translators (1997) 0.03
    0.028011736 = product of:
      0.05602347 = sum of:
        0.05602347 = product of:
          0.11204694 = sum of:
            0.11204694 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.11204694 = score(doc=1179,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  11. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.03
    0.028011736 = product of:
      0.05602347 = sum of:
        0.05602347 = product of:
          0.11204694 = sum of:
            0.11204694 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.11204694 = score(doc=3117,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  12. ¬Der Student aus dem Computer (2023) 0.03
    0.028011736 = product of:
      0.05602347 = sum of:
        0.05602347 = product of:
          0.11204694 = sum of:
            0.11204694 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.11204694 = score(doc=1079,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  13. Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001) 0.02
    0.024391936 = product of:
      0.048783872 = sum of:
        0.048783872 = product of:
          0.097567745 = sum of:
            0.097567745 = weight(_text_:500 in 5188) [ClassicSimilarity], result of:
              0.097567745 = score(doc=5188,freq=2.0), product of:
                0.36112627 = queryWeight, product of:
                  6.113391 = idf(docFreq=265, maxDocs=44218)
                  0.059071355 = queryNorm
                0.27017626 = fieldWeight in 5188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.113391 = idf(docFreq=265, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5188)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE evaluated on a zero-to-four scale, from stop word to definite content word, they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996, and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lower-casing, replacement of non-alphanumerics with spaces, and reduction of multiple spaces. In the "strength of context" approach, each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, the Robertson-Sparck Jones relevance weight for each term is computed, negative weights are replaced with zero, those below a randomness threshold are ignored, and the remainder summed for each document, to yield a score for the document and finally to assign to the term the average document score over documents in which it occurred. The average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase's frequency of occurrence would be equal to or less than Poisson expectations, is computed, and a score assigned which is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered to be content-bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by its occurrences in the Journal, and taking the product of all these values. The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method; the union set had 476 phrases. A second selection was made of two-word phrases, each occurring in only three documents, with a union of 599 phrases. A judge then rated the two sets of terms as to subject specificity on a 0-to-4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. The three methods all move content-bearing terms forward in the lists, as does the use of the sum of the logs of the three methods' scores.
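     A rough sketch of the "frequency clumping" idea, under one plausible reading of the Poisson model described above (not the authors' exact computation; scipy's Poisson CDF stands in for it):

         import math
         from scipy.stats import poisson

         def clumping_score(doc_freq, total_occurrences, n_docs):
             # Null model: occurrences scatter independently over documents, so the
             # document frequency is approximately Poisson with mean
             # n_docs * P(a given document receives at least one occurrence).
             lam = total_occurrences / n_docs
             expected_df = n_docs * (1.0 - math.exp(-lam))
             # p-value: probability of a document frequency this low or lower.
             p = max(poisson.cdf(doc_freq, expected_df), 1e-300)
             return -math.log(p)  # clumped (content-bearing) phrases score high

         # 300 occurrences squeezed into 40 documents scores far higher than
         # the same 300 occurrences spread thinly over 280 documents.
         print(clumping_score(40, 300, 100_000))   # strongly clumped
         print(clumping_score(280, 300, 100_000))  # near-random scatter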
  14. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.024010058 = product of:
      0.048020117 = sum of:
        0.048020117 = product of:
          0.096040234 = sum of:
            0.096040234 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.096040234 = score(doc=4483,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  15. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.024010058 = product of:
      0.048020117 = sum of:
        0.048020117 = product of:
          0.096040234 = sum of:
            0.096040234 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.096040234 = score(doc=4888,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  16. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.024010058 = product of:
      0.048020117 = sum of:
        0.048020117 = product of:
          0.096040234 = sum of:
            0.096040234 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.096040234 = score(doc=5429,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  17. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    0.020008383 = product of:
      0.040016767 = sum of:
        0.040016767 = product of:
          0.08003353 = sum of:
            0.08003353 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.08003353 = score(doc=1463,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  18. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.020008383 = product of:
      0.040016767 = sum of:
        0.040016767 = product of:
          0.08003353 = sum of:
            0.08003353 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.08003353 = score(doc=5428,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  19. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    0.020008383 = product of:
      0.040016767 = sum of:
        0.040016767 = product of:
          0.08003353 = sum of:
            0.08003353 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
              0.08003353 = score(doc=1693,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.38690117 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:37:18
  20. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.02
    0.016006706 = product of:
      0.032013413 = sum of:
        0.032013413 = product of:
          0.064026825 = sum of:
            0.064026825 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.064026825 = score(doc=8521,freq=2.0), product of:
                0.20685782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.059071355 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19

Languages

  • e 39
  • d 17
  • ru 1

Types

  • a 44
  • el 6
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1