Search (72 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.061287217 = product of:
      0.09193082 = sum of:
        0.07319836 = product of:
          0.21959509 = sum of:
            0.21959509 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.21959509 = score(doc=562,freq=2.0), product of:
                0.39072606 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046086997 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.018732455 = product of:
          0.03746491 = sum of:
            0.03746491 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03746491 = score(doc=562,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
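     The indented tree above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model, which this catalogue prints next to every hit. Read off the tree, each matching term contributes queryWeight times fieldWeight, each boolean clause is damped by a coordination factor, and the parts combine as below (a reconstruction from the printed factors; queryNorm, which ClassicSimilarity derives as the reciprocal square root of the sum of squared query weights, and fieldNorm are taken as given):

     \[
     \mathrm{score}(q,d) = \mathrm{coord}(q,d)\cdot\sum_{t\in q}\underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\mathrm{queryWeight}}\cdot\underbrace{\sqrt{f_{t,d}}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}},
     \qquad
     \mathrm{idf}(t) = 1 + \ln\frac{N}{n_t+1}
     \]

     Here N is maxDocs and n_t is docFreq. With N = 44218 and n_t = 24, idf = 1 + ln(44218/25) ≈ 8.478011, and the "_text_:3a" clause checks out exactly: fieldWeight = √2 · 8.478011 · 0.046875 ≈ 0.56201804, which times the queryWeight of 0.39072606 gives 0.21959509.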
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  2. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.03
    0.029061532 = product of:
      0.043592297 = sum of:
        0.03266503 = product of:
          0.09799508 = sum of:
            0.09799508 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
              0.09799508 = score(doc=3807,freq=14.0), product of:
                0.21010205 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046086997 = queryNorm
                0.46641657 = fieldWeight in 3807, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.33333334 = coord(1/3)
        0.0109272655 = product of:
          0.021854531 = sum of:
            0.021854531 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.021854531 = score(doc=3807,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
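     Entry 2 runs through the same arithmetic with weaker factors: "authors" is a far more common term (idf ≈ 4.56 against 8.48 for "3a"), and the matching field is longer (fieldNorm 0.02734375 against 0.046875). Below is a minimal Python sketch recomputing this entry's tree end to end; the constants are copied from the explain output above, and only the tf and idf forms are assumed (classic Lucene):

        import math

        # ClassicSimilarity building blocks (assumed): tf = sqrt(freq),
        # idf = 1 + ln(maxDocs / (docFreq + 1)).
        N = 44218                   # maxDocs
        query_norm = 0.046086997    # shared query normalizer (given)
        field_norm = 0.02734375     # per-field length norm for doc 3807 (given)

        def clause(doc_freq, freq, coord):
            idf = 1 + math.log(N / (doc_freq + 1))
            query_weight = idf * query_norm
            field_weight = math.sqrt(freq) * idf * field_norm
            return query_weight * field_weight * coord

        authors_part = clause(1258, 14.0, 1 / 3)    # ~0.03266503
        part_22 = clause(3622, 2.0, 1 / 2)          # ~0.0109272655
        score = (authors_part + part_22) * (2 / 3)  # top-level coord(2/3)
        print(f"{score:.9f}")                       # ~0.029061532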
    
    Abstract
     Purpose: Academic authors tend to define terms to meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified the KM terms that authors used in academic outlets from 1996 to 2006 to define KM. Data were collected under strict criteria, including that each definition had to be a unique instance. From 2006 onwards, no new unique definitions could be identified, only repeated use of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
     Design/methodology/approach: The paper aims to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider KM. The lexicon used by KM scholars was evaluated through lexicographical research methods, extended through Knowledge Discovery and Text Analysis methods.
     Findings: By simplifying term relationships through these methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). From an academic point of view, KM can therefore be said to refer to people processing contextualised content.
     Research limitations/implications: In total, 42 definitions were identified, spanning a period of 11 years from the first use of KM to the estimated apex of term usage. From 2006 onwards definitions were only used in repetition, and all repeated definitions were excluded as not being unique instances. The definitions listed are by no means complete and exhaustive; they are viewed outside the scope and context in which they were originally formulated and are used only to review the key concepts they contain.
     Social implications: The discussion of KM content and the method presented in this paper may have a few implications for future research in KM. First, the research validates ideas pertaining to KM presented by the OECD in 2005. It also shows that, through the evolution of KM, the field has arrived at a description of KM that may be seen as standardised. If academics and practitioners refer to KM as the same construct and/or idea, this has the potential, speculatively, to distinguish between what KM may or may not be.
     Originality/value: By focusing on the most common definitions and simplifying the terms used to define KM, the paper helps refocus KM by reconsidering the dimensions that are most common in how it has been defined over time. This should help reignite discussion of KM and how it may be used to the benefit of an organisation.
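     The "Knowledge Discovery and Text Analysis methods" stay abstract here; at their core, the study reduces a corpus of KM definitions to the terms that most often define KM. A toy sketch of that reduction (invented sample definitions, not the authors' data or exact method):

        from collections import Counter

        # The study analysed 42 unique KM definitions (1996-2006); these
        # three are invented stand-ins for illustration only.
        definitions = [
            "km is the process by which an organisation codifies and shares information",
            "km means people leveraging contextualised information within an organisation",
            "km is a process where persons codify share and leverage knowledge",
        ]
        stopwords = {"km", "is", "the", "by", "which", "an", "and", "a",
                     "where", "means", "within"}

        term_counts = Counter(word for d in definitions
                              for word in d.split() if word not in stopwords)
        # Surfaces the People / Process / Content vocabulary the paper reports.
        print(term_counts.most_common(5))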
    Date
    20. 1.2015 18:30:22
  3. Becks, D.; Schulz, J.M.: Domänenübergreifende Phrasenextraktion mithilfe einer lexikonunabhängigen Analysekomponente (2010) 0.03
    0.025189033 = product of:
      0.0755671 = sum of:
        0.0755671 = product of:
          0.1511342 = sum of:
            0.1511342 = weight(_text_:j.m in 4661) [ClassicSimilarity], result of:
              0.1511342 = score(doc=4661,freq=2.0), product of:
                0.28071982 = queryWeight, product of:
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5383809 = fieldWeight in 4661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4661)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  4. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.02
    0.024399456 = product of:
      0.07319836 = sum of:
        0.07319836 = product of:
          0.21959509 = sum of:
            0.21959509 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21959509 = score(doc=862,freq=2.0), product of:
                0.39072606 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046086997 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
     https://arxiv.org/abs/2212.06721
  5. Ponte, J.M.: Language models for relevance feedback (2000) 0.02
    0.018891776 = product of:
      0.056675326 = sum of:
        0.056675326 = product of:
          0.11335065 = sum of:
            0.11335065 = weight(_text_:j.m in 35) [ClassicSimilarity], result of:
              0.11335065 = score(doc=35,freq=2.0), product of:
                0.28071982 = queryWeight, product of:
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.046086997 = queryNorm
                0.4037857 = fieldWeight in 35, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.091085 = idf(docFreq=271, maxDocs=44218)
                  0.046875 = fieldNorm(doc=35)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  6. Warner, A.J.: Natural language processing (1987) 0.02
    0.016651072 = product of:
      0.049953215 = sum of:
        0.049953215 = product of:
          0.09990643 = sum of:
            0.09990643 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09990643 = score(doc=337,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  7. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.01
    0.014569688 = product of:
      0.043709062 = sum of:
        0.043709062 = product of:
          0.087418124 = sum of:
            0.087418124 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.087418124 = score(doc=3164,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  8. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.014569688 = product of:
      0.043709062 = sum of:
        0.043709062 = product of:
          0.087418124 = sum of:
            0.087418124 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.087418124 = score(doc=4506,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    8.10.2000 11:52:22
  9. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.014569688 = product of:
      0.043709062 = sum of:
        0.043709062 = product of:
          0.087418124 = sum of:
            0.087418124 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.087418124 = score(doc=6672,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.1996 9:22:19
  10. New tools for human translators (1997) 0.01
    0.014569688 = product of:
      0.043709062 = sum of:
        0.043709062 = product of:
          0.087418124 = sum of:
            0.087418124 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.087418124 = score(doc=1179,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.1996 9:22:19
  11. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    0.014569688 = product of:
      0.043709062 = sum of:
        0.043709062 = product of:
          0.087418124 = sum of:
            0.087418124 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.087418124 = score(doc=3117,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    28. 2.1999 10:48:22
  12. Der Student aus dem Computer (2023) 0.01
    0.014569688 = product of:
      0.043709062 = sum of:
        0.043709062 = product of:
          0.087418124 = sum of:
            0.087418124 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.087418124 = score(doc=1079,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 1.2023 16:22:55
  13. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    0.012488304 = product of:
      0.03746491 = sum of:
        0.03746491 = product of:
          0.07492982 = sum of:
            0.07492982 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.07492982 = score(doc=4483,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    15. 3.2000 10:22:37
  14. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    0.012488304 = product of:
      0.03746491 = sum of:
        0.03746491 = product of:
          0.07492982 = sum of:
            0.07492982 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07492982 = score(doc=4888,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 3.2013 14:56:22
  15. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.01
    0.012488304 = product of:
      0.03746491 = sum of:
        0.03746491 = product of:
          0.07492982 = sum of:
            0.07492982 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.07492982 = score(doc=5429,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    c't. 2000, H.22, S.230-231
  16. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.01
    0.010406921 = product of:
      0.03122076 = sum of:
        0.03122076 = product of:
          0.06244152 = sum of:
            0.06244152 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.06244152 = score(doc=1463,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.1996 9:22:19
  17. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.01
    0.010406921 = product of:
      0.03122076 = sum of:
        0.03122076 = product of:
          0.06244152 = sum of:
            0.06244152 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06244152 = score(doc=5428,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    c't. 2000, H.22, S.220-229
  18. Lezius, W.; Rapp, R.; Wettler, M.: A morphology-system and part-of-speech tagger for German (1996) 0.01
    0.010406921 = product of:
      0.03122076 = sum of:
        0.03122076 = product of:
          0.06244152 = sum of:
            0.06244152 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
              0.06244152 = score(doc=1693,freq=2.0), product of:
                0.16138881 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046086997 = queryNorm
                0.38690117 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1693)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2015 9:37:18
  19. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.01
    0.010182992 = product of:
      0.030548973 = sum of:
        0.030548973 = product of:
          0.09164692 = sum of:
            0.09164692 = weight(_text_:authors in 900) [ClassicSimilarity], result of:
              0.09164692 = score(doc=900,freq=6.0), product of:
                0.21010205 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046086997 = queryNorm
                0.43620193 = fieldWeight in 900, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=900)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Purpose: A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of them suffer from poor precision at low recall. The choice of indexing units has a great impact on search-system effectiveness. The authors go beyond simple term indexing and propose a framework for multi-word term (MWT) filtering and indexing.
     Design/methodology/approach: The authors rank MWT candidates and keep the most effective ones for the indexing process. The proposed model filters MWT according to their ability to capture the document topic and to distinguish between different documents from the same collection, relying on the hypothesis that the best MWT are those that achieve the greatest association degree. The experiments are carried out on English- and French-language data sets.
     Findings: The results indicate that this approach improves precision at low recall and performs better than more advanced models based on term dependencies.
     Originality/value: Different association measures are used and tested to select the MWT that best describe the documents, enhancing precision in the first retrieved documents.
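     The framework ranks candidate multi-word terms by an association measure and keeps only the top-ranked candidates for indexing. The paper compares several measures; as one concrete, commonly used stand-in (chosen here for illustration, not necessarily one of the authors' measures), a pointwise-mutual-information ranking of bigram candidates could look like this:

        import math
        from collections import Counter

        def rank_mwt(docs, top_k=10):
            """Rank bigram MWT candidates by pointwise mutual information,
            one possible instance of the 'association degree' above."""
            unigrams, bigrams = Counter(), Counter()
            for tokens in docs:
                unigrams.update(tokens)
                bigrams.update(zip(tokens, tokens[1:]))
            n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
            scored = []
            for (w1, w2), f in bigrams.items():
                if f < 2:          # drop hapax candidates
                    continue
                pmi = math.log2((f / n_bi) /
                                ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))
                scored.append((f"{w1} {w2}", pmi))
            return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

        # Toy usage on two hypothetical tokenised documents:
        docs = [["multi", "word", "terms", "improve", "information", "retrieval"],
                ["information", "retrieval", "indexes", "multi", "word", "terms"]]
        print(rank_mwt(docs))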
  20. Corbara, S.; Moreo, A.; Sebastiani, F.: Syllabic quantity patterns as rhythmic features for Latin authorship attribution (2023) 0.01
    0.009977253 = product of:
      0.029931758 = sum of:
        0.029931758 = product of:
          0.08979527 = sum of:
            0.08979527 = weight(_text_:authors in 846) [ClassicSimilarity], result of:
              0.08979527 = score(doc=846,freq=4.0), product of:
                0.21010205 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046086997 = queryNorm
                0.42738882 = fieldWeight in 846, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=846)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     It is well known that, in Latin written production, peculiar metric schemes were followed not only in poetic compositions but also in many prose works. Such metric patterns were based on so-called syllabic quantity, that is, on the length of the syllables involved, and there is substantial evidence that certain authors preferred certain metric patterns over others. In this research we investigate the possibility of employing syllabic quantity as a basis for deriving rhythmic features for the task of computational authorship attribution of Latin prose texts. We test the impact of these features on the authorship attribution task when combined with other topic-agnostic features. Our experiments, carried out on three different datasets using support vector machines (SVMs), show that rhythmic features based on syllabic quantity are beneficial in discriminating among Latin prose authors.
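     A rough illustration of the pipeline the abstract describes (not the authors' code or data): reduce each text to a string of syllable quantities, turn that string into character n-gram counts, and train an SVM on those rhythmic features.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Hypothetical input: texts already scanned into quantity strings,
        # 'L' = long syllable, 'S' = short; the scansion step is omitted.
        quantity_strings = ["LSSLLSLS", "SSLSLLSS", "LSLSLSLL", "SLLSSLSL"]
        labels = ["AuthorA", "AuthorB", "AuthorA", "AuthorB"]  # invented

        # Character n-grams over the {L, S} alphabet act as rhythmic features.
        clf = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(2, 4)),
            LinearSVC(),
        )
        clf.fit(quantity_strings, labels)
        print(clf.predict(["LSSLLSSL"]))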

Languages

  • e (English) 55
  • d (German) 17

Types

  • a 56
  • m 9
  • el 6
  • s 4
  • p 2
  • x 2
  • d 1
