Search (6 results, page 1 of 1)

  • type_ss:"x"
  • theme_ss:"Computerlinguistik"
  1. Bredack, J.: Automatische Extraktion fachterminologischer Mehrwortbegriffe : ein Verfahrensvergleich (2016) 0.08
    0.07832358 = product of:
      0.15664716 = sum of:
        0.11337131 = weight(_text_:master in 3194) [ClassicSimilarity], result of:
          0.11337131 = score(doc=3194,freq=2.0), product of:
            0.3116585 = queryWeight, product of:
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.047329273 = queryNorm
            0.36376774 = fieldWeight in 3194, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3194)
        0.04327585 = weight(_text_:reference in 3194) [ClassicSimilarity], result of:
          0.04327585 = score(doc=3194,freq=2.0), product of:
            0.19255297 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.047329273 = queryNorm
            0.22474778 = fieldWeight in 3194, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3194)
      0.5 = coord(2/4)
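    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(tf) × idf × fieldNorm, and the summed contributions are scaled by the coordination factor coord(matching terms / query terms). A minimal Python sketch, assuming exactly this classic formula, reproduces the numbers shown for the first hit (doc=3194):

        import math

        def term_score(freq, idf, query_norm, field_norm):
            # Per-term contribution under Lucene's ClassicSimilarity:
            # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm
            tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
            query_weight = idf * query_norm
            field_weight = tf * idf * field_norm
            return query_weight * field_weight

        # Two of the four query terms match doc 3194, hence coord(2/4) = 0.5.
        master    = term_score(2.0, 6.5848994, 0.047329273, 0.0390625)  # ~0.11337131
        reference = term_score(2.0, 4.0683694, 0.047329273, 0.0390625)  # ~0.04327585
        total     = 0.5 * (master + reference)                          # ~0.07832358, displayed as 0.08
        print(round(total, 8))

    The same composition accounts for the remaining hits; only the idf, fieldNorm, and coord values differ.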
    
    Abstract
    In this study, two systems were used to automatically extract multi-word terms (MWTs) from a document collection with a domain-specific focus (full texts of the ACL Anthology Reference Corpus). The thematic scope covered all areas of natural language processing, in particular computational linguistics (CL) as an interdisciplinary discipline. The goal was to extract MWTs that can serve as potential index terms in information retrieval (IR). These terms should point to, or name, concepts, methods, procedures, and algorithms in CL and adjacent fields such as linguistics and computer science.
    Content
    Written term paper (master's thesis) submitted for the degree of Master of Arts at Universität Trier, Fachbereich II, degree programme Computerlinguistik.
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.08
    0.07764148 = product of:
      0.15528296 = sum of:
        0.13604558 = weight(_text_:master in 563) [ClassicSimilarity], result of:
          0.13604558 = score(doc=563,freq=2.0), product of:
            0.3116585 = queryWeight, product of:
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.047329273 = queryNorm
            0.4365213 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.019237388 = product of:
          0.038474776 = sum of:
            0.038474776 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.038474776 = score(doc=563,freq=2.0), product of:
                0.16573904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047329273 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10.01.2013 19:22:47
  3. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.03
    0.034011394 = product of:
      0.13604558 = sum of:
        0.13604558 = weight(_text_:master in 2849) [ClassicSimilarity], result of:
          0.13604558 = score(doc=2849,freq=2.0), product of:
            0.3116585 = queryWeight, product of:
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.047329273 = queryNorm
            0.4365213 = fieldWeight in 2849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.046875 = fieldNorm(doc=2849)
      0.25 = coord(1/4)
    
    Content
    This publication was produced as part of a thesis for the Master of Science FHO in Business Administration, Major Information and Data Management.
  4. Renker, L.: Exploration von Textkorpora : Topic Models als Grundlage der Interaktion (2015) 0.03
    0.028342828 = product of:
      0.11337131 = sum of:
        0.11337131 = weight(_text_:master in 2380) [ClassicSimilarity], result of:
          0.11337131 = score(doc=2380,freq=2.0), product of:
            0.3116585 = queryWeight, product of:
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.047329273 = queryNorm
            0.36376774 = fieldWeight in 2380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5848994 = idf(docFreq=165, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
      0.25 = coord(1/4)
    
    Footnote
    Master's thesis submitted for the academic degree of Master of Science (M.Sc.) at the Fachhochschule Köln / Fakultät für Informatik und Ingenieurswissenschaften, degree programme Medieninformatik.
  5. Karlova-Bourbonus, N.: Automatic detection of contradictions in texts (2018) 0.01
    0.006491377 = product of:
      0.025965508 = sum of:
        0.025965508 = weight(_text_:reference in 5976) [ClassicSimilarity], result of:
          0.025965508 = score(doc=5976,freq=2.0), product of:
            0.19255297 = queryWeight, product of:
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.047329273 = queryNorm
            0.13484865 = fieldWeight in 5976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.0683694 = idf(docFreq=2055, maxDocs=44218)
              0.0234375 = fieldNorm(doc=5976)
      0.25 = coord(1/4)
    
    Abstract
    Implicit contradictions are only partially the subject of the present study, which aims primarily at identifying the realization mechanisms and cues (Chapter 5) and at finding the parts of contradictions by applying state-of-the-art algorithms for natural language processing, without conducting deep meaning processing. Also in focus are the explicit and implicit contradictions that can be detected by means of explicit linguistic, structural, and lexical cues, and by conducting some additional processing operations (e.g., summing figures in order to detect contradictions arising from numerical divergences). One should note that additional complexity in finding contradictions can arise when the parts of a contradiction occur on different levels of realization. A contradiction can thus be observed on the word and phrase level, such as in "a married bachelor" (for variations of contradictions on the lexical level, see Ganeev 2004); on the sentence level, between parts of a sentence or between two or more sentences; or on the text level, between portions of a text or between whole texts, such as a contradiction between the Bible and the Quran. Only contradictions arising at the level of single sentences occurring in one or more texts, as well as within parts of a sentence, will be considered for the purpose of this study. Though the focus of interest is on single sentences, the study makes use of text particularities such as coreference resolution, without establishing the referents in the real world. Finally, another aspect to be considered is that the parts of a contradiction do not necessarily appear at the same time. They can be separated by many years or even centuries, with or without a time expression, which makes their recognition by humans and their detection by machines challenging. According to Aristotle's ontological version of the LNC (Section 3.1.1), however, the same time reference is required for two statements to be judged a contradiction. Taking this into account, we set the borders of the study by limiting the analyzed textual data thematically (only nine world events) and temporally (three days after the reported event happened) (Section 5.1). No sophisticated time processing will thus be conducted.
  6. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.00
    0.004809347 = product of:
      0.019237388 = sum of:
        0.019237388 = product of:
          0.038474776 = sum of:
            0.038474776 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.038474776 = score(doc=1746,freq=2.0), product of:
                0.16573904 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047329273 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22.03.2015 9:17:30