Search (713 results, page 1 of 36)

  • theme_ss:"Computerlinguistik"
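
  The chip above is the active facet restriction that narrows the 713 hits to the theme "Computerlinguistik". As a minimal sketch of how such a filtered, faceted request could be issued programmatically (assuming a Solr-style backend, which the "_ss" field suffix and the Lucene explain output below suggest; the endpoint URL, facet field names and parameters here are illustrative assumptions, not this catalogue's documented API):

      import requests  # third-party HTTP client

      # Hypothetical Solr endpoint; only the filter value is taken from this page.
      params = {
          "q": "*:*",
          "fq": 'theme_ss:"Computerlinguistik"',       # the active facet filter shown above
          "rows": 20,                                  # 20 hits per page -> 36 pages for 713 results
          "start": 0,                                  # offset 0 = page 1
          "facet": "true",
          "facet.field": ["type_ss", "language_ss"],   # assumed names for the "Types"/"Languages" facets
          "debugQuery": "true",                        # asks Solr for score explanations like those below
          "wt": "json",
      }
      r = requests.get("http://localhost:8983/solr/catalogue/select", params=params)
      print(r.json()["response"]["numFound"])          # expected: 713
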
  1. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.11
    0.1085256 = product of:
      0.16278839 = sum of:
        0.009385608 = weight(_text_:a in 1693) [ClassicSimilarity], result of:
          0.009385608 = score(doc=1693,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18016359 = fieldWeight in 1693, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1693)
        0.15340279 = sum of:
          0.092189826 = weight(_text_:de in 1693) [ClassicSimilarity], result of:
            0.092189826 = score(doc=1693,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.47480997 = fieldWeight in 1693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.078125 = fieldNorm(doc=1693)
          0.061212968 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
            0.061212968 = score(doc=1693,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.38690117 = fieldWeight in 1693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=1693)
      0.6666667 = coord(2/3)
    
    Date
    22. 3.2015 9:37:18
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
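
  The breakdown above is Lucene's ClassicSimilarity explain output read bottom-up: for each matched term, tf(freq) = sqrt(freq) and the printed idf are multiplied with fieldNorm and queryNorm, the per-term products are summed, and the sum is scaled by the coordination factor. A minimal sketch in Python that reproduces hit 1's score of 0.1085256 from the numbers shown above (assuming only the standard ClassicSimilarity reading of this output):

      from math import sqrt

      QUERY_NORM = 0.045180224   # queryNorm printed in every clause above
      FIELD_NORM = 0.078125      # fieldNorm(doc=1693)

      def term_score(freq, idf):
          """ClassicSimilarity per-term score = queryWeight * fieldWeight."""
          query_weight = idf * QUERY_NORM               # e.g. 4.297489 * 0.045180224 = 0.19416152
          field_weight = sqrt(freq) * idf * FIELD_NORM  # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      s_a  = term_score(freq=4.0, idf=1.153047)   # ~0.009385608 (_text_:a)
      s_de = term_score(freq=2.0, idf=4.297489)   # ~0.092189826 (_text_:de)
      s_22 = term_score(freq=2.0, idf=3.5018296)  # ~0.061212968 (_text_:22)

      # Two of the query's three top-level clauses matched this record, hence coord(2/3).
      total = (s_a + (s_de + s_22)) * (2.0 / 3.0)
      print(round(total, 7))                      # ~0.1085256, the score shown for hit 1
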
  2. Rieger, F.: Lügende Computer (2023) 0.09
    0.08535436 = product of:
      0.12803154 = sum of:
        0.0053093014 = weight(_text_:a in 912) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=912,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=912)
        0.12272224 = sum of:
          0.07375186 = weight(_text_:de in 912) [ClassicSimilarity], result of:
            0.07375186 = score(doc=912,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.37984797 = fieldWeight in 912, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0625 = fieldNorm(doc=912)
          0.048970375 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
            0.048970375 = score(doc=912,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.30952093 = fieldWeight in 912, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=912)
      0.6666667 = coord(2/3)
    
    Date
    16. 3.2023 19:22:55
    Source
    https://steadyhq.com/de/realitatsabzweig/posts/3ed79605-0650-4725-ab35-43f1243b57ee
    Type
    a
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08014647 = sum of:
      0.05381863 = product of:
        0.21527451 = sum of:
          0.21527451 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21527451 = score(doc=562,freq=2.0), product of:
              0.38303843 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045180224 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.007963953 = weight(_text_:a in 562) [ClassicSimilarity], result of:
        0.007963953 = score(doc=562,freq=8.0), product of:
          0.05209492 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.045180224 = queryNorm
          0.15287387 = fieldWeight in 562, product of:
            2.828427 = tf(freq=8.0), with freq of:
              8.0 = termFreq=8.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.046875 = fieldNorm(doc=562)
      0.01836389 = product of:
        0.03672778 = sum of:
          0.03672778 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03672778 = score(doc=562,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Type
    a
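
  The idf and tf factors that recur in every breakdown can likewise be recomputed from the docFreq/maxDocs and termFreq values printed next to them; a short sketch, assuming Lucene's classic defaults idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq), which match the figures above to rounding error. The nested coord factors in hit 3 simply scale each boolean sub-clause by the fraction of its terms that matched.

      from math import log, sqrt

      MAX_DOCS = 44218  # corpus size reported in every idf(...) line

      def idf(doc_freq):
          """Classic Lucene idf: 1 + ln(maxDocs / (docFreq + 1))."""
          return 1.0 + log(MAX_DOCS / (doc_freq + 1))

      def tf(freq):
          """Classic Lucene tf: square root of the within-document term frequency."""
          return sqrt(freq)

      print(idf(37942))   # ~1.153047  (_text_:a)
      print(idf(1634))    # ~4.297489  (_text_:de)
      print(idf(3622))    # ~3.5018296 (_text_:22)
      print(idf(24))      # ~8.478011  (_text_:3a)
      print(tf(8.0))      # ~2.828427

      # Nested coord factors in hit 3: each sub-clause is scaled by (matching terms / terms in clause).
      print(0.21527451 * (1 / 4))   # ~0.05381863, the 3a clause after coord(1/4)
      print(0.03672778 * (1 / 2))   # 0.01836389, the 22 clause after coord(1/2)
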
  4. Chibout, K.; Vilnat, A.: Primitive sémantiques, classification des verbes et polysémie (1999) 0.07
    0.067716956 = product of:
      0.101575434 = sum of:
        0.009385608 = weight(_text_:a in 6229) [ClassicSimilarity], result of:
          0.009385608 = score(doc=6229,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18016359 = fieldWeight in 6229, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=6229)
        0.092189826 = product of:
          0.18437965 = sum of:
            0.18437965 = weight(_text_:de in 6229) [ClassicSimilarity], result of:
              0.18437965 = score(doc=6229,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.94961995 = fieldWeight in 6229, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6229)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  5. Sidhom, S.; Hassoun, M.: Morpho-syntactic parsing to text mining environment : NP recognition model to knowledge visualization and information (2003) 0.07
    0.06588431 = product of:
      0.09882645 = sum of:
        0.0066366266 = weight(_text_:a in 3546) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=3546,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 3546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3546)
        0.092189826 = product of:
          0.18437965 = sum of:
            0.18437965 = weight(_text_:de in 3546) [ClassicSimilarity], result of:
              0.18437965 = score(doc=3546,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.94961995 = fieldWeight in 3546, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3546)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Tendencias de investigación en organización del conocimiento: IV Coloquio Internacional de Ciencias de la Documentación, VI Congreso del Capítulo Español de ISKO = Trends in knowledge organization research. Eds.: J.A. Frias u. C. Travieso
    Type
    a
  6. Ferret, O.; Grau, B.; Masson, N.: Utilisation d'un réseau de cooccurrences lexicales pour améliorer une analyse thématique fondée sur la distribution des mots (1999) 0.06
    0.06205046 = product of:
      0.09307569 = sum of:
        0.010618603 = weight(_text_:a in 6295) [ClassicSimilarity], result of:
          0.010618603 = score(doc=6295,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 6295, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6295)
        0.08245709 = product of:
          0.16491418 = sum of:
            0.16491418 = weight(_text_:de in 6295) [ClassicSimilarity], result of:
              0.16491418 = score(doc=6295,freq=10.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.8493659 = fieldWeight in 6295, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6295)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Übers. d. Titels: Use of a network of lexical co-occurrences to improve a thematic analysis based on the distribution of words
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  7. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.05
    0.05334647 = product of:
      0.080019705 = sum of:
        0.0033183133 = weight(_text_:a in 5557) [ClassicSimilarity], result of:
          0.0033183133 = score(doc=5557,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.06369744 = fieldWeight in 5557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5557)
        0.076701395 = sum of:
          0.046094913 = weight(_text_:de in 5557) [ClassicSimilarity], result of:
            0.046094913 = score(doc=5557,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.23740499 = fieldWeight in 5557, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5557)
          0.030606484 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
            0.030606484 = score(doc=5557,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.19345059 = fieldWeight in 5557, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5557)
      0.6666667 = coord(2/3)
    
    Date
    26.12.2000 13:22:17
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
    Type
    a
  8. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.05
    0.05334647 = product of:
      0.080019705 = sum of:
        0.0033183133 = weight(_text_:a in 734) [ClassicSimilarity], result of:
          0.0033183133 = score(doc=734,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.06369744 = fieldWeight in 734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=734)
        0.076701395 = sum of:
          0.046094913 = weight(_text_:de in 734) [ClassicSimilarity], result of:
            0.046094913 = score(doc=734,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.23740499 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
          0.030606484 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
            0.030606484 = score(doc=734,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.19345059 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
      0.6666667 = coord(2/3)
    
    Abstract
    Wie lernen Kinder sprechen? Welche Hinweise geben gerade ihre Fehler beim Spracherwerb auf den Ablauf des Lernprozesses - getreu dem Motto: »Kinder sagen die tollsten Sachen«? Und wie helfen beziehungsweise warum scheitern bislang Computer bei der Simulation neuronaler Netzwerke, die am komplizierten Gewebe der menschlichen Sprache mitwirken? In seinem neuen Buch Wörter und Regeln hat der bekannte US-amerikanische Kognitionswissenschaftler Steven Pinker (Der Sprachinstinkt) wieder einmal eine ebenso informative wie kurzweilige Erkundungstour ins Reich der Sprache unternommen. Was die Sache besonders spannend und lesenswert macht: Souverän beleuchtet der Professor am Massachusetts Institute of Technology sowohl natur- als auch geisteswissenschaftliche Aspekte. So vermittelt er einerseits linguistische Grundlagen in den Fußspuren Ferdinand de Saussures, etwa die einer generativen Grammatik, liefert einen Exkurs durch die Sprachgeschichte und widmet ein eigenes Kapitel den »Schrecken der deutschen Sprache«. Andererseits lässt er aber auch die neuesten bildgebenden Verfahren nicht außen vor, die zeigen, was im Gehirn bei der Sprachverarbeitung abläuft. Pinkers Theorie, die sich in diesem Puzzle verschiedenster Aspekte wiederfindet: Sprache besteht im Kern aus zwei Bestandteilen - einem mentalen Lexikon aus erinnerten Wörtern und einer mentalen Grammatik aus verschiedenen kombinatorischen Regeln. Konkret heißt das: Wir prägen uns bekannte Größen und ihre abgestuften, sich kreuzenden Merkmale ein, aber wir erzeugen auch neue geistige Produkte, indem wir Regeln anwenden. Gerade daraus, so schließt Pinker, erschließt sich der Reichtum und die ungeheure Ausdruckskraft unserer Sprache.
    Date
    19. 7.2002 14:22:31
    Footnote
    Rez. in: Frankfurter Rundschau Nr.43 vom 20.2.2001, S.23 (A. Barthelmy)
  9. Vazov, N.: Identification des différentes structures temporelles dans des textes et leurs rôles dans le raisonnement temporel (1999) 0.05
    0.05270744 = product of:
      0.07906116 = sum of:
        0.0053093014 = weight(_text_:a in 6203) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=6203,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 6203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6203)
        0.07375186 = product of:
          0.14750372 = sum of:
            0.14750372 = weight(_text_:de in 6203) [ClassicSimilarity], result of:
              0.14750372 = score(doc=6203,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.75969595 = fieldWeight in 6203, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6203)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Lille : Université Charles-de-Gaulle
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
    Type
    a
  10. Wauschkuhn, O.: ¬Ein Werkzeug zur partiellen syntaktischen Analyse deutscher Textkorpora (1996) 0.05
    0.049216103 = product of:
      0.07382415 = sum of:
        0.009291277 = weight(_text_:a in 7296) [ClassicSimilarity], result of:
          0.009291277 = score(doc=7296,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.17835285 = fieldWeight in 7296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=7296)
        0.064532876 = product of:
          0.12906575 = sum of:
            0.12906575 = weight(_text_:de in 7296) [ClassicSimilarity], result of:
              0.12906575 = score(doc=7296,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.66473395 = fieldWeight in 7296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7296)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
  11. Konrad, K.; Maier, H.; Pinkal, M.; Milward, D.: CLEARS: ein Werkzeug für Ausbildung und Forschung in der Computerlinguistik (1996) 0.04
    0.042185232 = product of:
      0.06327785 = sum of:
        0.007963953 = weight(_text_:a in 7298) [ClassicSimilarity], result of:
          0.007963953 = score(doc=7298,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 7298, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=7298)
        0.055313893 = product of:
          0.110627785 = sum of:
            0.110627785 = weight(_text_:de in 7298) [ClassicSimilarity], result of:
              0.110627785 = score(doc=7298,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56977195 = fieldWeight in 7298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7298)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : Mouton de Gruyter
    Type
    a
  12. Bertrand-Gastaldy, S.: ¬La modélisation de l'analyse documentaire : à la convergence de la sémiotique, de la psychologie cognitive et de l'intelligence (1995) 0.04
    0.042185232 = product of:
      0.06327785 = sum of:
        0.007963953 = weight(_text_:a in 5377) [ClassicSimilarity], result of:
          0.007963953 = score(doc=5377,freq=8.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 5377, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5377)
        0.055313893 = product of:
          0.110627785 = sum of:
            0.110627785 = weight(_text_:de in 5377) [ClassicSimilarity], result of:
              0.110627785 = score(doc=5377,freq=8.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56977195 = fieldWeight in 5377, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5377)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Textual semiotics and cognitive psychology are advocated to model several types of documentary analysis. Proposes a theoretical model which combines elements from the 2 disciplines. Thanks to the addition of values of properties pertaining to different semiotic systems to the primary and secondary texts, one can retrieve the units and the characteristics valued by a group of indexers or by one individual. The cognitive studies of the experts confirm or complete the textual analysis. Examples from the findings obtained by the statistic-linguistic analysis of 2 corpora illustrate the usefulness of the methodology, especially for the conception of expert systems to assist whatever kind of reading
    Source
    Connectedness: information, systems, people, organizations. Proceedings of CAIS/ACSI 95, the proceedings of the 23rd Annual Conference of the Canadian Association for Information Science. Ed. by Hope A. Olson and Denis B. Ward
    Type
    a
  13. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.04
    0.042185232 = product of:
      0.06327785 = sum of:
        0.007963953 = weight(_text_:a in 6501) [ClassicSimilarity], result of:
          0.007963953 = score(doc=6501,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 6501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=6501)
        0.055313893 = product of:
          0.110627785 = sum of:
            0.110627785 = weight(_text_:de in 6501) [ClassicSimilarity], result of:
              0.110627785 = score(doc=6501,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56977195 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Type
    a
  14. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.04181507 = product of:
      0.0627226 = sum of:
        0.05381863 = product of:
          0.21527451 = sum of:
            0.21527451 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21527451 = score(doc=862,freq=2.0), product of:
                0.38303843 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045180224 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
        0.00890397 = weight(_text_:a in 862) [ClassicSimilarity], result of:
          0.00890397 = score(doc=862,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.1709182 = fieldWeight in 862, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.6666667 = coord(2/3)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
    Type
    a
  15. Warner, A.J.: Natural language processing (1987) 0.04
    0.039725985 = product of:
      0.059588976 = sum of:
        0.010618603 = weight(_text_:a in 337) [ClassicSimilarity], result of:
          0.010618603 = score(doc=337,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.20383182 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=337)
        0.048970375 = product of:
          0.09794075 = sum of:
            0.09794075 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09794075 = score(doc=337,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
    Type
    a
  16. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    0.039294697 = product of:
      0.058942042 = sum of:
        0.016092965 = weight(_text_:a in 4506) [ClassicSimilarity], result of:
          0.016092965 = score(doc=4506,freq=6.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.3089162 = fieldWeight in 4506, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=4506)
        0.04284908 = product of:
          0.08569816 = sum of:
            0.08569816 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.08569816 = score(doc=4506,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
    Type
    a
  17. Kurz, C.: Womit sich Strafverfolger bald befassen müssen : ChatGPT (2023) 0.04
    0.038306497 = product of:
      0.05745974 = sum of:
        0.0053093014 = weight(_text_:a in 203) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=203,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=203)
        0.05215044 = product of:
          0.10430088 = sum of:
            0.10430088 = weight(_text_:de in 203) [ClassicSimilarity], result of:
              0.10430088 = score(doc=203,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.53718615 = fieldWeight in 203, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=203)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://netzpolitik.org/2023/chatgpt-womit-sich-strafverfolger-bald-befassen-muessen/?utm_source=pocket-newtab-global-de-DE#!
    Type
    a
  18. Bischoff, M.: Wie eine KI lernt, sich selbst zu erklären (2023) 0.04
    0.038306497 = product of:
      0.05745974 = sum of:
        0.0053093014 = weight(_text_:a in 956) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=956,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=956)
        0.05215044 = product of:
          0.10430088 = sum of:
            0.10430088 = weight(_text_:de in 956) [ClassicSimilarity], result of:
              0.10430088 = score(doc=956,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.53718615 = fieldWeight in 956, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://www.spektrum.de/news/sprachmodelle-auf-dem-weg-zu-einer-erklaerbaren-ki/2132727#Echobox=1682669561?utm_source=pocket-newtab-global-de-DE
    Type
    a
  19. Klein, A.; Weis, U.; Stede, M.: ¬Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet (2000) 0.04
    0.036987014 = product of:
      0.05548052 = sum of:
        0.009385608 = weight(_text_:a in 5542) [ClassicSimilarity], result of:
          0.009385608 = score(doc=5542,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.18016359 = fieldWeight in 5542, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
        0.046094913 = product of:
          0.092189826 = sum of:
            0.092189826 = weight(_text_:de in 5542) [ClassicSimilarity], result of:
              0.092189826 = score(doc=5542,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.47480997 = fieldWeight in 5542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5542)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
    Type
    a
  20. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.04
    0.03515436 = product of:
      0.05273154 = sum of:
        0.0066366266 = weight(_text_:a in 1172) [ClassicSimilarity], result of:
          0.0066366266 = score(doc=1172,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.12739488 = fieldWeight in 1172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=1172)
        0.046094913 = product of:
          0.092189826 = sum of:
            0.092189826 = weight(_text_:de in 1172) [ClassicSimilarity], result of:
              0.092189826 = score(doc=1172,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.47480997 = fieldWeight in 1172, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1172)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Imprint
    Berlin : De Gruyter Mouton

Languages

Types

  • a 629
  • el 76
  • m 43
  • s 23
  • x 9
  • p 7
  • b 1
  • d 1
  • pat 1
  • r 1

Subjects

Classifications