Search (24 results, page 1 of 2)

  • theme_ss:"Computerlinguistik"
  • year_i:[2010 TO 2020}
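  A note on the year facet: in Lucene/Solr range syntax a square bracket is an inclusive bound and a curly brace an exclusive one, so year_i:[2010 TO 2020} matches 2010 <= year < 2020; the mixed brackets are deliberate, not a typo. Written as an explicit Solr filter-query parameter, the same facet reads:

    fq=year_i:[2010 TO 2020}    (years 2010 through 2019, upper bound excluded)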
  1. Kocijan, K.: Visualizing natural language resources (2015) 0.02
    0.016018637 = product of:
      0.032037273 = sum of:
        0.032037273 = product of:
          0.064074546 = sum of:
            0.064074546 = weight(_text_:k in 2995) [ClassicSimilarity], result of:
              0.064074546 = score(doc=2995,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.39440846 = fieldWeight in 2995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2995)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
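
    The scoring breakdown attached to each hit is Lucene's ClassicSimilarity explanation (tf-idf with coordination factors). As a sanity check, the numbers for this first hit can be reproduced in a few lines of Python; every statistic is copied from the explanation above, and the last digit may differ due to floating-point rounding:

      import math

      # Statistics from the explanation of hit 1 (term "k" in doc 2995)
      max_docs   = 44218        # documents in the index
      doc_freq   = 3384         # documents containing the term
      freq       = 2.0          # occurrences of the term in this field
      field_norm = 0.078125     # stored length normalization for the field
      query_norm = 0.045509085  # query normalization factor

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 3.569778
      query_weight = idf * query_norm                    # 0.16245733 (queryWeight)
      field_weight = math.sqrt(freq) * idf * field_norm  # 0.39440846 (fieldWeight)
      raw = query_weight * field_weight                  # 0.064074546
      print(raw * 0.5 * 0.5)  # two coord(1/2) factors -> ~0.016018637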
    
  2. Baierer, K.; Zumstein, P.: Verbesserung der OCR in digitalen Sammlungen von Bibliotheken (2016) 0.01
    0.01281491 = product of:
      0.02562982 = sum of:
        0.02562982 = product of:
          0.05125964 = sum of:
            0.05125964 = weight(_text_:k in 2818) [ClassicSimilarity], result of:
              0.05125964 = score(doc=2818,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.31552678 = fieldWeight in 2818, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2818)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    0.012331706 = product of:
      0.024663411 = sum of:
        0.024663411 = product of:
          0.049326822 = sum of:
            0.049326822 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.049326822 = score(doc=1490,freq=2.0), product of:
                0.15936506 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045509085 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:30:24
  4. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.01
    0.0112130465 = product of:
      0.022426093 = sum of:
        0.022426093 = product of:
          0.044852186 = sum of:
            0.044852186 = weight(_text_:k in 3510) [ClassicSimilarity], result of:
              0.044852186 = score(doc=3510,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.27608594 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings of the 13th Meeting of the German Section of the International Society for Knowledge Organization (ISKO) and the 13th International Symposium of Information Science of the Higher Education Association for Information Science (HI), Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings of the 14th Meeting of the German Section of the International Society for Knowledge Organization (ISKO) and Natural Language & Information Systems (NLDB), Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings of the Workshop of the German Section of the International Society for Knowledge Organization (ISKO) at SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings of the Workshop of the Polish and German Sections of the International Society for Knowledge Organization (ISKO), Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Ed. by W. Babik, H.P. Ohly and K. Weber
  5. Bowker, L.; Ciro, J.B.: Machine translation and global research : towards improved machine translation literacy in the scholarly community (2019) 0.01
    0.0110980375 = product of:
      0.022196075 = sum of:
        0.022196075 = product of:
          0.04439215 = sum of:
            0.04439215 = weight(_text_:k in 5970) [ClassicSimilarity], result of:
              0.04439215 = score(doc=5970,freq=6.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.27325422 = fieldWeight in 5970, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5970)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    BFP (FH K)
    Footnote
    Review in: JASIST 71(2020) no.10, pp.1275-1278 (Krystyna K. Matusiak).
    GHBS
    BFP (FH K)
  6. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: A comparison study of some Arabic root finding algorithms (2010) 0.01
    0.009611183 = product of:
      0.019222366 = sum of:
        0.019222366 = product of:
          0.03844473 = sum of:
            0.03844473 = weight(_text_:k in 3457) [ClassicSimilarity], result of:
              0.03844473 = score(doc=3457,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.23664509 = fieldWeight in 3457, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3457)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Moohebat, M.; Raj, R.G.; Kareem, S.B.A.; Thorleuchter, D.: Identifying ISI-indexed articles by their lexical usage : a text analysis approach (2015) 0.01
    0.009611183 = product of:
      0.019222366 = sum of:
        0.019222366 = product of:
          0.03844473 = sum of:
            0.03844473 = weight(_text_:k in 1664) [ClassicSimilarity], result of:
              0.03844473 = score(doc=1664,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.23664509 = fieldWeight in 1664, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1664)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This research creates an architecture for investigating the existence of probable lexical divergences between articles, categorized as Institute for Scientific Information (ISI) and non-ISI, and consequently, if such a difference is discovered, to propose the best available classification method. Based on a collection of ISI- and non-ISI-indexed articles in the areas of business and computer science, three classification models are trained. A sensitivity analysis is applied to demonstrate the impact of words in different syntactical forms on the classification decision. The results demonstrate that the lexical domains of ISI and non-ISI articles are distinguishable by machine learning techniques. Our findings indicate that the support vector machine identifies ISI-indexed articles in both disciplines with higher precision than do the Naïve Bayesian and K-Nearest Neighbors techniques.
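
    The comparison described above (support vector machine vs. Naïve Bayesian vs. K-Nearest Neighbors over lexical features) follows a standard text-classification recipe. A minimal sketch using scikit-learn, with a hypothetical placeholder corpus rather than the authors' ISI/non-ISI data:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import LinearSVC
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.neighbors import KNeighborsClassifier

      # Hypothetical toy corpus: label 1 = ISI-indexed article, 0 = non-ISI
      texts = ["method results dataset significance test"] * 4 + \
              ["overview trends summary general discussion"] * 4
      labels = [1, 1, 1, 1, 0, 0, 0, 0]

      X = TfidfVectorizer().fit_transform(texts)  # lexical bag-of-words features
      for clf in (LinearSVC(), MultinomialNB(), KNeighborsClassifier(n_neighbors=3)):
          acc = cross_val_score(clf, X, labels, cv=2).mean()
          print(type(clf).__name__, acc)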
  8. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.01
    0.009611183 = product of:
      0.019222366 = sum of:
        0.019222366 = product of:
          0.03844473 = sum of:
            0.03844473 = weight(_text_:k in 2849) [ClassicSimilarity], result of:
              0.03844473 = score(doc=2849,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.23664509 = fieldWeight in 2849, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.01
    0.009611183 = product of:
      0.019222366 = sum of:
        0.019222366 = product of:
          0.03844473 = sum of:
            0.03844473 = weight(_text_:k in 3414) [ClassicSimilarity], result of:
              0.03844473 = score(doc=3414,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.23664509 = fieldWeight in 3414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3414)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.01
    0.009248778 = product of:
      0.018497556 = sum of:
        0.018497556 = product of:
          0.036995113 = sum of:
            0.036995113 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.036995113 = score(doc=563,freq=2.0), product of:
                0.15936506 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045509085 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 1.2013 19:22:47
  11. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.01
    0.009248778 = product of:
      0.018497556 = sum of:
        0.018497556 = product of:
          0.036995113 = sum of:
            0.036995113 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.036995113 = score(doc=1848,freq=2.0), product of:
                0.15936506 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045509085 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  12. Ramisch, C.: Multiword expressions acquisition : a generic and open framework (2015) 0.01
    0.00906151 = product of:
      0.01812302 = sum of:
        0.01812302 = product of:
          0.03624604 = sum of:
            0.03624604 = weight(_text_:k in 1649) [ClassicSimilarity], result of:
              0.03624604 = score(doc=1649,freq=4.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.22311112 = fieldWeight in 1649, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1649)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    BFP (FH K)
    GHBS
    BFP (FH K)
  13. Sankarasubramaniam, Y.; Ramanathan, K.; Ghosh, S.: Text summarization using Wikipedia (2014) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 2693) [ClassicSimilarity], result of:
              0.032037273 = score(doc=2693,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 2693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Savoy, J.: Text representation strategies : an example with the State of the Union addresses (2016) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 3042) [ClassicSimilarity], result of:
              0.032037273 = score(doc=3042,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 3042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3042)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Based on State of the Union addresses from 1790 to 2014 (225 speeches delivered by 42 presidents), this paper describes and evaluates different text representation strategies. To determine the most important words of a given text, the term frequencies (tf) or the tf-idf weighting scheme can be applied. Recently, latent Dirichlet allocation (LDA) has been proposed to define the topics included in a corpus. As another strategy, this study proposes to apply a vocabulary specificity measure (Z-score) to determine the most significantly overused word-types or short sequences of them. Our experiments show that the simple term frequency measure is not able to discriminate between specific terms associated with a document or a set of texts. Using the tf-idf or LDA approach, the selection requires some arbitrary decisions. Based on the term-specific measure (Z-score), the term selection has a clear theoretical basis. Moreover, the most significant sentences for each presidency can be determined. As another facet, we can visualize the dynamic evolution of usage of some terms associated with their specificity measures. Finally, this technique can be employed to define the most important lexical leaders introducing terms overused by the k following presidencies.
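
    The Z-score mentioned in this abstract is, in one common formulation, the standardized difference between a term's observed frequency in a subcorpus and the frequency expected under the whole-corpus rate. A minimal sketch of that binomial formulation (variable names are illustrative, not taken from the paper):

      import math

      def z_score(freq_in_part, part_size, freq_in_corpus, corpus_size):
          # Binomial model: drawing part_size tokens at the term's overall rate p
          p = freq_in_corpus / corpus_size
          expected = part_size * p
          return (freq_in_part - expected) / math.sqrt(part_size * p * (1 - p))

      # Toy numbers: a term used 40 times in a 10,000-token address but only
      # 200 times in a 1,000,000-token corpus is strongly overused.
      print(z_score(40, 10_000, 200, 1_000_000))  # about 26.9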
  15. Lian, T.; Yu, C.; Wang, W.; Yuan, Q.; Hou, Z.: Doctoral dissertations on tourism in China : a co-word analysis (2016) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 3178) [ClassicSimilarity], result of:
              0.032037273 = score(doc=3178,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 3178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3178)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The aim of this paper is to map the foci of research in doctoral dissertations on tourism in China. In the paper, co-word analysis is applied, with keywords coming from six public dissertation databases, i.e. CDFD, Wanfang Data, NLC, CALIS, ISTIC, and NSTL, as well as some university libraries providing doctoral dissertations on tourism. Altogether we have examined 928 doctoral dissertations on tourism written between 1989 and 2013. Doctoral dissertations on tourism in China involve 36 first-level disciplines and 102 secondary-level disciplines. We collect the top 68 keywords of practical significance in tourism that are mentioned at least four times. These keywords are classified into 12 categories based on co-word analysis, including cluster analysis, strategic diagram analysis, and social network analysis. According to the strategic diagram of the 12 categories, we identify the mature and immature areas of tourism study. The social network maps show the original co-occurrence matrix and a k-cores analysis of the binary matrix. The paper provides valuable insight into the study of tourism by analyzing doctoral dissertations on tourism in China.
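
    The k-cores step mentioned in the abstract prunes a keyword co-occurrence network down to its densely connected part: the k-core is the maximal subgraph in which every node keeps at least k links. A minimal sketch with networkx, using invented keyword pairs for illustration:

      import networkx as nx

      # Invented (keyword, keyword, co-occurrence count) triples
      pairs = [("ecotourism", "sustainability", 5),
               ("ecotourism", "tourist behavior", 3),
               ("sustainability", "tourist behavior", 4),
               ("heritage", "tourist behavior", 1)]

      G = nx.Graph()
      G.add_weighted_edges_from(pairs)
      core = nx.k_core(G, k=2)     # repeatedly drop nodes with fewer than 2 links
      print(sorted(core.nodes()))  # ['ecotourism', 'sustainability', 'tourist behavior']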
  16. Lhadj, L.S.; Boughanem, M.; Amrouche, K.: Enhancing information retrieval through concept-based language modeling and semantic smoothing (2016) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 3221) [ClassicSimilarity], result of:
              0.032037273 = score(doc=3221,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 3221, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3221)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Järvelin, A.; Keskustalo, H.; Sormunen, E.; Saastamoinen, M.; Kettunen, K.: Information retrieval from historical newspaper collections in highly inflectional languages : a query expansion approach (2016) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 3223) [ClassicSimilarity], result of:
              0.032037273 = score(doc=3223,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 3223, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3223)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. K., Vani; Gupta, D.: Unmasking text plagiarism using syntactic-semantic based natural language processing techniques : comparisons, analysis and challenges (2018) 0.01
    0.008009318 = product of:
      0.016018637 = sum of:
        0.016018637 = product of:
          0.032037273 = sum of:
            0.032037273 = weight(_text_:k in 5084) [ClassicSimilarity], result of:
              0.032037273 = score(doc=5084,freq=2.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19720423 = fieldWeight in 5084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5084)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Fóris, A.: Network theory and terminology (2013) 0.01
    0.007707316 = product of:
      0.015414632 = sum of:
        0.015414632 = product of:
          0.030829264 = sum of:
            0.030829264 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.030829264 = score(doc=1365,freq=2.0), product of:
                0.15936506 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045509085 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 9.2014 21:22:48
  20. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.01
    0.006796132 = product of:
      0.013592264 = sum of:
        0.013592264 = product of:
          0.027184527 = sum of:
            0.027184527 = weight(_text_:k in 331) [ClassicSimilarity], result of:
              0.027184527 = score(doc=331,freq=4.0), product of:
                0.16245733 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.045509085 = queryNorm
                0.16733333 = fieldWeight in 331, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=331)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "The Dresden researchers examined the semantic properties of texts mathematically by encoding ten different English texts in various forms, among them the English edition of Leo Tolstoy's 'War and Peace'. For example, the researchers translated the letters of a text into a binary sequence, replacing every vowel with a one and every consonant with a zero. Using further mathematical functions, they then examined different levels of the text, from individual vowels and letters up to whole words, each encoded in different forms. In this way recurring patterns can be found across the whole text. This relationship within a text is called long-range correlation: it indicates whether two letters at arbitrarily distant positions in a text are connected. For example, if we find the letter 'K' at one point, there is a measurably higher probability of finding the letter 'K' again a few pages later. 'It is to be expected that if a book is about war at one point, the probability is high that the word war will also be found a few pages later. What is surprising is that we also find this high probability at the level of individual letters,' says Altmann."