Search (57 results, page 1 of 3)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10192762 = sum of:
      0.08115815 = product of:
        0.24347445 = sum of:
          0.24347445 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24347445 = score(doc=562,freq=2.0), product of:
              0.43321466 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05109862 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020769471 = product of:
        0.041538943 = sum of:
          0.041538943 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041538943 = score(doc=562,freq=2.0), product of:
              0.17893866 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05109862 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
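The score breakdown shown for the first hit is Lucene's ClassicSimilarity `explain()` output. As a rough sketch, the arithmetic can be reproduced from the constants in the tree (queryNorm and fieldNorm are copied verbatim; the idf formula is the standard ClassicSimilarity one, and maxDocs/docFreq values are taken from the tree):

```python
import math

# Lucene ClassicSimilarity, reconstructed from the explain tree of hit 1 (doc 562).
# idf(t) = 1 + ln(maxDocs / (docFreq + 1))
def idf(doc_freq, max_docs=44218):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.05109862   # copied verbatim from the tree
FIELD_NORM = 0.046875     # per-field length norm, also from the tree

def term_score(doc_freq, freq, coord):
    """queryWeight * fieldWeight * coord for one query term."""
    w_idf = idf(doc_freq)
    query_weight = w_idf * QUERY_NORM               # idf * queryNorm
    field_weight = math.sqrt(freq) * w_idf * FIELD_NORM  # tf = sqrt(freq)
    return query_weight * field_weight * coord

# term "3a": docFreq=24, freq=2, coord(1/3); term "22": docFreq=3622, freq=2, coord(1/2)
score = term_score(24, 2.0, 1 / 3) + term_score(3622, 2.0, 1 / 2)
print(round(score, 4))  # ≈ 0.1019 (the tree reports 0.10192762)
```

The last digits differ slightly from the tree because Lucene computes in 32-bit floats; the structure (tf · idf · fieldNorm, weighted by idf · queryNorm and the coord factor) is the same.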
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.04
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Warner, A.J.: Natural language processing (1987) 0.03
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  4. He, S.: Translingual alteration of conceptual information in medical translation : a crosslanguage analysis between English and Chinese (2000) 0.03
    
    Abstract
    This research investigated conceptual alteration in the translation of medical article titles between English and Chinese, with a twofold purpose: one was to further justify the findings from a pilot study, and the other was to further investigate how concepts were altered in translation. The research corpus of 800 medical article titles in English and Chinese was selected from two English medical journals and two Chinese medical journals. The analysis was based on the pairing of concepts in English and Chinese and their conceptual similarity/dissimilarity via translation between English and Chinese. Two kinds of conceptual alteration were discussed: one was apparent conceptual alteration, obvious from the addition or omission of concepts in translation; the other was latent conceptual alteration, which was not obvious and could only be recognized by the differences between the original and translated concepts. The findings from the pilot study were verified by the findings from this research. Additional findings, for example the addition/omission of single-word and multiword concepts in the general and medical domains, and implicit vs. explicit information, were also discussed. The findings provide useful insights for future studies on crosslanguage information retrieval via medical translation between English and Chinese, as well as other languages.
  5. Wu, H.; He, J.; Pei, Y.: Scientific impact at the topic level : a case study in computational linguistics (2010) 0.02
    
    Abstract
    In this article, we propose to apply the topic model and topic-level eigenfactor (TEF) algorithm to assess the relative importance of academic entities including articles, authors, journals, and conferences. Scientific impact is measured by a PageRank score biased toward the topics created by the latent topic model. The TEF metric considers the impact of an academic entity in multiple granular views as well as in a global view. Experiments on a computational linguistics corpus show that the method is a useful and promising measure of scientific impact.
  6. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  7. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    
    Date
    8.10.2000 11:52:22
  8. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    
    Date
    31. 7.1996 9:22:19
  9. New tools for human translators (1997) 0.02
    
    Date
    31. 7.1996 9:22:19
  10. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.02
    
    Date
    28. 2.1999 10:48:22
  11. Der Student aus dem Computer (2023) 0.02
    
    Date
    27. 1.2023 16:22:55
  12. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.02
    
    Abstract
    Publishing articles in high-impact English journals is difficult for scholars around the world, especially for non-native English-speaking scholars (NNESs), most of whom struggle with proficiency in English. To uncover the differences in English scientific writing between native English-speaking scholars (NESs) and NNESs, we collected a large-scale data set containing more than 150,000 full-text articles published in PLoS between 2006 and 2015. We divided these articles into three groups according to the ethnic backgrounds of the first and corresponding authors, obtained by Ethnea, and examined the scientific writing styles in English from a two-fold perspective of linguistic complexity: (a) syntactic complexity, including measurements of sentence length and sentence complexity; and (b) lexical complexity, including measurements of lexical diversity, lexical density, and lexical sophistication. The observations suggest marginal differences between groups in syntactical and lexical complexity.
  13. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    
    Date
    15. 3.2000 10:22:37
  14. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    
    Date
    1. 3.2013 14:56:22
  15. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    
    Source
    c't. 2000, H.22, S.230-231
  16. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    
    Date
    31. 7.1996 9:22:19
  17. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    
    Source
    c't. 2000, H.22, S.220-229
  18. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    
    Date
    22. 3.2015 9:37:18
  19. Humphrey, S.M.; Rogers, W.J.; Kilicoglu, H.; Demner-Fushman, D.; Rindflesch, T.C.: Word sense disambiguation by selecting the best semantic type based on journal descriptor indexing : preliminary experiment (2006) 0.01
    
    Abstract
    An experiment was performed at the National Library of Medicine® (NLM®) in word sense disambiguation (WSD) using the Journal Descriptor Indexing (JDI) methodology. The motivation is the need to solve the ambiguity problem confronting NLM's MetaMap system, which maps free text to terms corresponding to concepts in NLM's Unified Medical Language System® (UMLS®) Metathesaurus®. If the text maps to more than one Metathesaurus concept at the same high confidence score, MetaMap has no way of knowing which concept is the correct mapping. We describe the JDI methodology, which is ultimately based on statistical associations between words in a training set of MEDLINE® citations and a small set of journal descriptors (assigned by humans to journals per se) assumed to be inherited by the citations. JDI is the basis for selecting the best meaning that is correlated to UMLS semantic types (STs) assigned to ambiguous concepts in the Metathesaurus. For example, the ambiguity transport has two meanings: "Biological Transport," assigned the ST Cell Function, and "Patient transport," assigned the ST Health Care Activity. A JDI-based methodology can analyze text containing transport and determine which ST receives a higher score for that text, which then returns the associated meaning, presumed to apply to the ambiguity itself. We then present an experiment in which a baseline disambiguation method was compared to four versions of JDI in disambiguating 45 ambiguous strings from NLM's WSD Test Collection. Overall average precision for the highest-scoring JDI version was 0.7873 compared to 0.2492 for the baseline method, and average precision for individual ambiguities was greater than 0.90 for 23 of them (51%), greater than 0.85 for 24 (53%), and greater than 0.65 for 35 (79%). On the basis of these results, we hope to improve the performance of JDI and test its use in applications.
  20. Donath, A.: Nutzungsverbote für ChatGPT (2023) 0.01
    
    Content
    Billion-dollar valuation for ChatGPT: According to a report in the Wall Street Journal, OpenAI, which operates the chatbot ChatGPT, is in talks about a share sale. The WSJ reported that the possible sale of shares would raise OpenAI's valuation to 29 billion US dollars. Concerns in Brandenburg as well: Erik Stohn, an SPD member of the Brandenburg state parliament, used ChatGPT to draft a formal parliamentary inquiry (Kleine Anfrage) to the Brandenburg Landtag, asking how the state government ensures that students are assessed and graded fairly when texts are machine-generated. He also asked what measures had been taken to ensure that machine-generated texts cannot be used fraudulently by students in the assessment of coursework.

Languages

  • e 40
  • d 17

Types

  • a 44
  • el 6
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1