Search (713 results, page 1 of 36)

  • Filter: theme_ss:"Computerlinguistik"
  1. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.12
    0.12423068 = product of:
      0.18634602 = sum of:
        0.0107438285 = weight(_text_:a in 1693) [ClassicSimilarity], result of:
          0.0107438285 = score(doc=1693,freq=4.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.18016359 = fieldWeight in 1693, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1693)
        0.1756022 = sum of:
          0.1055309 = weight(_text_:de in 1693) [ClassicSimilarity], result of:
            0.1055309 = score(doc=1693,freq=2.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.47480997 = fieldWeight in 1693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.078125 = fieldNorm(doc=1693)
          0.07007129 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
            0.07007129 = score(doc=1693,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.38690117 = fieldWeight in 1693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=1693)
      0.6666667 = coord(2/3)
    
    Date
    22. 3.2015 9:37:18
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
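
The indented trees under each hit are Lucene "explain" output for the classic TF-IDF scorer (ClassicSimilarity): each term clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(m/n) scales the sum by the fraction of query clauses that matched. fieldNorm is the byte-quantized length normalization, which is why it only takes values such as 0.078125 or 0.0625; clauses like _text_:22 apparently match incidental tokens such as the "22" in the Date field above. A minimal sketch that reproduces the first tree, assuming ClassicSimilarity's standard formulas tf = sqrt(termFreq) and idf = 1 + ln(numDocs / (docFreq + 1)), with numDocs and queryNorm read off the output above:

```python
import math

# Constants read off the explain tree for result 1 (doc 1693).
NUM_DOCS = 44218          # maxDocs shown in every idf(...) line
QUERY_NORM = 0.051718395  # given; depends on the full query, which is not shown

def idf(doc_freq):
    # ClassicSimilarity inverse document frequency.
    return 1.0 + math.log(NUM_DOCS / (doc_freq + 1))

def clause(freq, doc_freq, field_norm):
    """weight = queryWeight * fieldWeight for one term clause."""
    query_weight = idf(doc_freq) * QUERY_NORM                 # e.g. ~0.05963374 for 'a'
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight

w_a  = clause(freq=4.0, doc_freq=37942, field_norm=0.078125)  # ~0.0107438285
w_de = clause(freq=2.0, doc_freq=1634,  field_norm=0.078125)  # ~0.1055309
w_22 = clause(freq=2.0, doc_freq=3622,  field_norm=0.078125)  # ~0.07007129

# coord(2/3): only 2 of the 3 top-level query clauses matched this document.
score = (w_a + (w_de + w_22)) * 2.0 / 3.0
print(f"{score:.8f}")  # ~0.12423068 (tiny deviations: Lucene does this in 32-bit floats)
```

The same three building blocks should reproduce every other tree on this page; only termFreq, docFreq, fieldNorm, and the coord fractions change.
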
  2. Rieger, F.: Lügende Computer (2023) 0.10
    0.09770625 = product of:
      0.14655937 = sum of:
        0.0060776267 = weight(_text_:a in 912) [ClassicSimilarity], result of:
          0.0060776267 = score(doc=912,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.10191591 = fieldWeight in 912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=912)
        0.14048174 = sum of:
          0.08442472 = weight(_text_:de in 912) [ClassicSimilarity], result of:
            0.08442472 = score(doc=912,freq=2.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.37984797 = fieldWeight in 912, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0625 = fieldNorm(doc=912)
          0.05605703 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
            0.05605703 = score(doc=912,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.30952093 = fieldWeight in 912, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=912)
      0.6666667 = coord(2/3)
    
    Date
    16. 3.2023 19:22:55
    Source
    https://steadyhq.com/de/realitatsabzweig/posts/3ed79605-0650-4725-ab35-43f1243b57ee
    Type
    a
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.09
    0.09174472 = sum of:
      0.06160689 = product of:
        0.24642757 = sum of:
          0.24642757 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24642757 = score(doc=562,freq=2.0), product of:
              0.43846914 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051718395 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.009116441 = weight(_text_:a in 562) [ClassicSimilarity], result of:
        0.009116441 = score(doc=562,freq=8.0), product of:
          0.05963374 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.051718395 = queryNorm
          0.15287387 = fieldWeight in 562, product of:
            2.828427 = tf(freq=8.0), with freq of:
              8.0 = termFreq=8.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.046875 = fieldNorm(doc=562)
      0.021021385 = product of:
        0.04204277 = sum of:
          0.04204277 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04204277 = score(doc=562,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Type
    a
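
The Content link above stores a Google redirect rather than the target document; the real URL sits percent-encoded in its url parameter (the Source of result 14 below is a stray fragment of the same kind of link). A small sketch for recovering the target, with the tracking parameters (ei, usg, sig2, bvm) omitted here for brevity:

```python
from urllib.parse import urlparse, parse_qs

# Redirect link as stored in the Content field above (tracking params omitted).
redirect = ("http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja"
            "&ved=0CEAQFjAA"
            "&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload"
            "%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf")

# parse_qs both splits the query string and percent-decodes the values.
target = parse_qs(urlparse(redirect).query)["url"][0]
print(target)
# -> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
```
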
  4. Chibout, K.; Vilnat, A.: Primitive sémantiques, classification des verbes et polysémie (1999) 0.08
    0.07751649 = product of:
      0.11627473 = sum of:
        0.0107438285 = weight(_text_:a in 6229) [ClassicSimilarity], result of:
          0.0107438285 = score(doc=6229,freq=4.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.18016359 = fieldWeight in 6229, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=6229)
        0.1055309 = product of:
          0.2110618 = sum of:
            0.2110618 = weight(_text_:de in 6229) [ClassicSimilarity], result of:
              0.2110618 = score(doc=6229,freq=8.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.94961995 = fieldWeight in 6229, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6229)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  5. Sidhom, S.; Hassoun, M.: Morpho-syntactic parsing to text mining environment : NP recognition model to knowledge visualization and information (2003) 0.08
    0.07541863 = product of:
      0.11312794 = sum of:
        0.0075970334 = weight(_text_:a in 3546) [ClassicSimilarity], result of:
          0.0075970334 = score(doc=3546,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.12739488 = fieldWeight in 3546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3546)
        0.1055309 = product of:
          0.2110618 = sum of:
            0.2110618 = weight(_text_:de in 3546) [ClassicSimilarity], result of:
              0.2110618 = score(doc=3546,freq=8.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.94961995 = fieldWeight in 3546, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3546)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
     Tendencias de investigación en organización del conocimiento: IV Coloquio Internacional de Ciencias de la Documentación, VI Congreso del Capítulo Español de ISKO = Trends in knowledge organization research. Eds.: J.A. Frias and C. Travieso
    Type
    a
  6. Ferret, O.; Grau, B.; Masson, N.: Utilisation d'un réseau de cooccurrences lexicales pour améliorer une analyse thématique fondée sur la distribution des mots (1999) 0.07
    0.071029976 = product of:
      0.10654496 = sum of:
        0.012155253 = weight(_text_:a in 6295) [ClassicSimilarity], result of:
          0.012155253 = score(doc=6295,freq=8.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.20383182 = fieldWeight in 6295, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6295)
        0.09438971 = product of:
          0.18877941 = sum of:
            0.18877941 = weight(_text_:de in 6295) [ClassicSimilarity], result of:
              0.18877941 = score(doc=6295,freq=10.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.8493659 = fieldWeight in 6295, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6295)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
     Translated title: Use of a network of lexical co-occurrences to improve a thematic analysis based on the distribution of words
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  7. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.06
    0.06106641 = product of:
      0.09159961 = sum of:
        0.0037985167 = weight(_text_:a in 5557) [ClassicSimilarity], result of:
          0.0037985167 = score(doc=5557,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.06369744 = fieldWeight in 5557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5557)
        0.0878011 = sum of:
          0.05276545 = weight(_text_:de in 5557) [ClassicSimilarity], result of:
            0.05276545 = score(doc=5557,freq=2.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.23740499 = fieldWeight in 5557, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5557)
          0.035035644 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
            0.035035644 = score(doc=5557,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.19345059 = fieldWeight in 5557, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5557)
      0.6666667 = coord(2/3)
    
    Date
    26.12.2000 13:22:17
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
    Type
    a
  8. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.06
    0.06106641 = product of:
      0.09159961 = sum of:
        0.0037985167 = weight(_text_:a in 734) [ClassicSimilarity], result of:
          0.0037985167 = score(doc=734,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.06369744 = fieldWeight in 734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=734)
        0.0878011 = sum of:
          0.05276545 = weight(_text_:de in 734) [ClassicSimilarity], result of:
            0.05276545 = score(doc=734,freq=2.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.23740499 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
          0.035035644 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
            0.035035644 = score(doc=734,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.19345059 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
      0.6666667 = coord(2/3)
    
    Abstract
     How do children learn to speak? What clues do their errors during language acquisition give about the course of the learning process, true to the motto "kids say the darnedest things"? And how do computers help, or why have they so far failed, in simulating the neural networks involved in the intricate fabric of human language? In his new book Wörter und Regeln, the well-known US cognitive scientist Steven Pinker (Der Sprachinstinkt) once again undertakes an exploratory tour through the realm of language that is as informative as it is entertaining. What makes it especially engaging and worth reading is that the MIT professor confidently illuminates both the natural-science and the humanities side. On the one hand he conveys linguistic fundamentals in the footsteps of Ferdinand de Saussure, such as those of a generative grammar, offers an excursion through the history of language, and devotes a chapter of its own to the "horrors of the German language". On the other hand he does not leave out the latest imaging techniques, which show what happens in the brain during language processing. Pinker's theory, which emerges from this puzzle of diverse aspects: language consists at its core of two components, a mental lexicon of remembered words and a mental grammar of various combinatorial rules. Concretely, this means that we memorize familiar items and their graded, intersecting features, but we also generate new mental products by applying rules. It is precisely from this, Pinker concludes, that the richness and the enormous expressive power of our language arise.
    Date
    19. 7.2002 14:22:31
    Footnote
     Rez. in: Frankfurter Rundschau Nr.43 vom 20.2.2001, S.23 (A. Barthelmy)
  9. Vazov, N.: Identification des différentes structures temporelles dans des textes et leurs rôles dans le raisonnement temporel (1999) 0.06
    0.0603349 = product of:
      0.090502344 = sum of:
        0.0060776267 = weight(_text_:a in 6203) [ClassicSimilarity], result of:
          0.0060776267 = score(doc=6203,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.10191591 = fieldWeight in 6203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6203)
        0.08442472 = product of:
          0.16884944 = sum of:
            0.16884944 = weight(_text_:de in 6203) [ClassicSimilarity], result of:
              0.16884944 = score(doc=6203,freq=8.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.75969595 = fieldWeight in 6203, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6203)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  10. Wauschkuhn, O.: ¬Ein Werkzeug zur partiellen syntaktischen Analyse deutscher Textkorpora (1996) 0.06
    0.056338318 = product of:
      0.08450747 = sum of:
        0.010635847 = weight(_text_:a in 7296) [ClassicSimilarity], result of:
          0.010635847 = score(doc=7296,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.17835285 = fieldWeight in 7296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=7296)
        0.07387163 = product of:
          0.14774325 = sum of:
            0.14774325 = weight(_text_:de in 7296) [ClassicSimilarity], result of:
              0.14774325 = score(doc=7296,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.66473395 = fieldWeight in 7296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7296)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
  11. Konrad, K.; Maier, H.; Pinkal, M.; Milward, D.: CLEARS: ein Werkzeug für Ausbildung und Forschung in der Computerlinguistik (1996) 0.05
    0.048289984 = product of:
      0.07243498 = sum of:
        0.009116441 = weight(_text_:a in 7298) [ClassicSimilarity], result of:
          0.009116441 = score(doc=7298,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.15287387 = fieldWeight in 7298, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=7298)
        0.063318536 = product of:
          0.12663707 = sum of:
            0.12663707 = weight(_text_:de in 7298) [ClassicSimilarity], result of:
              0.12663707 = score(doc=7298,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.56977195 = fieldWeight in 7298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7298)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
  12. Bertrand-Gastaldy, S.: ¬La modélisation de l'analyse documentaire : à la convergence de la sémiotique, de la psychologie cognitive et de l'intelligence (1995) 0.05
    0.048289984 = product of:
      0.07243498 = sum of:
        0.009116441 = weight(_text_:a in 5377) [ClassicSimilarity], result of:
          0.009116441 = score(doc=5377,freq=8.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.15287387 = fieldWeight in 5377, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5377)
        0.063318536 = product of:
          0.12663707 = sum of:
            0.12663707 = weight(_text_:de in 5377) [ClassicSimilarity], result of:
              0.12663707 = score(doc=5377,freq=8.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.56977195 = fieldWeight in 5377, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5377)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Textual semiotics and cognitive psychology are advocated to model several types of documentary analysis. Proposes a theoretical model which combines elements from the 2 disciplines. Thanks to the addition of values of properties pertaining to different semiotic systems to the primary and secondary texts, one can retrieve the units and the characteristics valued by a group of indexers or by one individual. The cognitive studies of the experts confirm or complete the textual analysis. Examples from the findings obtained by the statistic-linguistic analysis of 2 corpora illustrate the usefulness of the methodology, especially for the conception of expert systems to assist whatever kind of reading
    Source
    Connectedness: information, systems, people, organizations. Proceedings of CAIS/ACSI 95, the proceedings of the 23rd Annual Conference of the Canadian Association for Information Science. Ed. by Hope A. Olson and Denis B. Ward
    Type
    a
  13. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.05
    0.048289984 = product of:
      0.07243498 = sum of:
        0.009116441 = weight(_text_:a in 6501) [ClassicSimilarity], result of:
          0.009116441 = score(doc=6501,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.15287387 = fieldWeight in 6501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=6501)
        0.063318536 = product of:
          0.12663707 = sum of:
            0.12663707 = weight(_text_:de in 6501) [ClassicSimilarity], result of:
              0.12663707 = score(doc=6501,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.56977195 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Type
    a
  14. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.05
    0.047866255 = product of:
      0.07179938 = sum of:
        0.06160689 = product of:
          0.24642757 = sum of:
            0.24642757 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24642757 = score(doc=862,freq=2.0), product of:
                0.43846914 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051718395 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
        0.010192491 = weight(_text_:a in 862) [ClassicSimilarity], result of:
          0.010192491 = score(doc=862,freq=10.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.1709182 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6666667 = coord(2/3)
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
     https://arxiv.org/abs/2212.06721
    Type
    a
  15. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.05
    0.046789717 = sum of:
      0.027492289 = product of:
        0.109969154 = sum of:
          0.109969154 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.109969154 = score(doc=3807,freq=14.0), product of:
              0.23577455 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051718395 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.0070349523 = weight(_text_:a in 3807) [ClassicSimilarity], result of:
        0.0070349523 = score(doc=3807,freq=14.0), product of:
          0.05963374 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.051718395 = queryNorm
          0.11796933 = fieldWeight in 3807, product of:
            3.7416575 = tf(freq=14.0), with freq of:
              14.0 = termFreq=14.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.02734375 = fieldNorm(doc=3807)
      0.012262475 = product of:
        0.02452495 = sum of:
          0.02452495 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.02452495 = score(doc=3807,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.5 = coord(1/2)
    
    Abstract
     Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria which included that definitions should be unique instances. From 2006 onwards, these authors could not identify new unique instances of definitions with repetitive usage of such definition instances. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
     Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods.
     Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
     Research limitations/implications: In total, 42 definitions were identified spanning a period of 11 years. This represented the first use of KM through the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions that were considered to repeat were therefore subsequently excluded as not being unique instances. All definitions listed are by no means complete and exhaustive. The definitions are viewed outside the scope and context in which they were originally formulated and then used to review the key concepts in the definitions themselves.
     Social implications: When the authors refer to the aforementioned discussion of KM content as well as the presentation of the method followed in this paper, the authors may have a few implications for future research in KM. First, the research validates ideas presented by the OECD in 2005 pertaining to KM. It also validates that through the evolution of KM, the authors ended with a description of KM that may be seen as a standardised description. If the authors as academics and practitioners, for example, refer to KM as the same construct and/or idea, it has the potential to, speculatively, distinguish between what KM may or may not be.
     Originality/value: By simplifying the term used to define KM, by focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are the most common in how it has been defined over time. This would hopefully assist in reigniting discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
     Type
    a
  16. Warner, A.J.: Natural language processing (1987) 0.05
    0.045474857 = product of:
      0.068212286 = sum of:
        0.012155253 = weight(_text_:a in 337) [ClassicSimilarity], result of:
          0.012155253 = score(doc=337,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.20383182 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=337)
        0.05605703 = product of:
          0.11211406 = sum of:
            0.11211406 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.11211406 = score(doc=337,freq=2.0), product of:
                0.18110901 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051718395 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
    Type
    a
  17. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    0.04498115 = product of:
      0.06747173 = sum of:
        0.018421829 = weight(_text_:a in 4506) [ClassicSimilarity], result of:
          0.018421829 = score(doc=4506,freq=6.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.3089162 = fieldWeight in 4506, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.0490499 = product of:
          0.0980998 = sum of:
            0.0980998 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.0980998 = score(doc=4506,freq=2.0), product of:
                0.18110901 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051718395 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
    Type
    a
  18. Kurz, C.: Womit sich Strafverfolger bald befassen müssen : ChatGPT (2023) 0.04
    0.043849945 = product of:
      0.06577492 = sum of:
        0.0060776267 = weight(_text_:a in 203) [ClassicSimilarity], result of:
          0.0060776267 = score(doc=203,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.10191591 = fieldWeight in 203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=203)
        0.059697293 = product of:
          0.119394585 = sum of:
            0.119394585 = weight(_text_:de in 203) [ClassicSimilarity], result of:
              0.119394585 = score(doc=203,freq=4.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.53718615 = fieldWeight in 203, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=203)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://netzpolitik.org/2023/chatgpt-womit-sich-strafverfolger-bald-befassen-muessen/?utm_source=pocket-newtab-global-de-DE#!
    Type
    a
  19. Bischoff, M.: Wie eine KI lernt, sich selbst zu erklären (2023) 0.04
    0.043849945 = product of:
      0.06577492 = sum of:
        0.0060776267 = weight(_text_:a in 956) [ClassicSimilarity], result of:
          0.0060776267 = score(doc=956,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.10191591 = fieldWeight in 956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=956)
        0.059697293 = product of:
          0.119394585 = sum of:
            0.119394585 = weight(_text_:de in 956) [ClassicSimilarity], result of:
              0.119394585 = score(doc=956,freq=4.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.53718615 = fieldWeight in 956, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://www.spektrum.de/news/sprachmodelle-auf-dem-weg-zu-einer-erklaerbaren-ki/2132727#Echobox=1682669561?utm_source=pocket-newtab-global-de-DE
    Type
    a
  20. Klein, A.; Weis, U.; Stede, M.: ¬Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet (2000) 0.04
    0.04233952 = product of:
      0.06350928 = sum of:
        0.0107438285 = weight(_text_:a in 5542) [ClassicSimilarity], result of:
          0.0107438285 = score(doc=5542,freq=4.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.18016359 = fieldWeight in 5542, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
        0.05276545 = product of:
          0.1055309 = sum of:
            0.1055309 = weight(_text_:de in 5542) [ClassicSimilarity], result of:
              0.1055309 = score(doc=5542,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.47480997 = fieldWeight in 5542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5542)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
    Type
    a

Types

  • a 629
  • el 76
  • m 43
  • s 23
  • x 9
  • p 7
  • b 1
  • d 1
  • pat 1
  • r 1
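
The type counts above sum to 791 against 713 hits, so a record can presumably carry more than one type code. A quick tally:

```python
# Facet counts from the Types list above.
types = {"a": 629, "el": 76, "m": 43, "s": 23, "x": 9,
         "p": 7, "b": 1, "d": 1, "pat": 1, "r": 1}
total = sum(types.values())
print(total)  # 791, vs. 713 hits: some records evidently carry several type codes
```
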
