Search (713 results, page 2 of 36)

  • Filter: theme_ss:"Computerlinguistik"
  1. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.03
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
    Type
    a
  2. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    Date
    31. 7.1996 9:22:19
    Type
    a
  3. New tools for human translators (1997) 0.03
    Abstract
     A special issue devoted to the theme of new tools for human translators
    Date
    31. 7.1996 9:22:19
  4. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.03
    Date
    28. 2.1999 10:48:22
    Type
    a
  5. Der Student aus dem Computer (2023) 0.03
    Date
    27. 1.2023 16:22:55
    Type
    a
  6. Cruys, T. van de; Moirón, B.V.: Semantics-based multiword expression extraction (2007) 0.03
    Abstract
     This paper describes a fully unsupervised and automated method for large-scale extraction of multiword expressions (MWEs) from large corpora. The method aims at capturing the non-compositionality of MWEs; the intuition is that a noun within an MWE cannot easily be replaced by a semantically similar noun. To implement this intuition, a noun clustering is automatically extracted (using distributional similarity measures), which gives us clusters of semantically related nouns. Next, a number of statistical measures - based on selectional preferences - are developed that formalize the intuition of non-compositionality. Our approach has been tested on Dutch and automatically evaluated using Dutch lexical resources.
    Source
     Proceedings of the Workshop on A Broader Perspective on Multiword Expressions, Prague 2007
    Type
    a
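
  To make the non-compositionality intuition of entry 6 concrete, here is a minimal Python sketch. The co-occurrence counts and noun clusters are invented for illustration, and the simple substitutability ratio only stands in for the selectional-preference measures developed in the paper:

     # Toy (head, noun) co-occurrence counts; in the paper these would
     # come from a large parsed corpus.
     pair_freq = {
         ("break", "ice"): 50,      # candidate MWE "break the ice"
         ("break", "snow"): 1,
         ("break", "frost"): 0,
         ("drink", "coffee"): 40,   # ordinary compositional phrase
         ("drink", "tea"): 35,
         ("drink", "cocoa"): 20,
     }

     # Hypothetical clusters standing in for nouns grouped by
     # distributional similarity.
     clusters = {"ice": ["snow", "frost"], "coffee": ["tea", "cocoa"]}

     def substitutability(head, noun):
         """Mean frequency of cluster-mate substitutes relative to the
         original pair; values near zero suggest non-compositionality."""
         subs = [pair_freq.get((head, alt), 0) for alt in clusters[noun]]
         return (sum(subs) / len(subs)) / pair_freq[(head, noun)]

     print(substitutability("break", "ice"))     # 0.01  -> MWE-like
     print(substitutability("drink", "coffee"))  # ~0.69 -> compositional
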
  7. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.03
    Date
    15. 3.2000 10:22:37
    Type
    a
  8. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    Date
    1. 3.2013 14:56:22
  9. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.03
    Source
    c't. 2000, H.22, S.230-231
    Type
    a
  10. Sabah, G.: Knowledge representation and natural language understanding (1993) 0.03
    Abstract
     Describes the basic artificial intelligence techniques in linguistic knowledge processing that attempt to get machines to understand natural languages. Focusses on how computing techniques can model the communication process. Briefly examines the theoretical and practical importance of this field. Introduces a sample of theories used to represent linguistic knowledge. Presents semantic representations (various logics and semantic networks) and examines pragmatic aspects of communication (discourse analysis). Describes parsing systems. Addresses architectural issues. Shows why Distributed Artificial Intelligence and reflective systems offer the best framework, taking examples from the CARAMEL system (Compréhension Automatique de Récits, Apprentissage et Modélisation des Échanges langagiers)
    Type
    a
  11. Colace, F.; Santo, M. De; Greco, L.; Napoletano, P.: Weighted word pairs for query expansion (2015) 0.03
    Abstract
     This paper proposes a novel query expansion method to improve the accuracy of text retrieval systems. Our method makes use of minimal relevance feedback to expand the initial query with a structured representation composed of weighted pairs of words. Such a structure is obtained from the relevance feedback through a method for selecting pairs of words based on the Probabilistic Topic Model. We compared our method with other baseline query expansion schemes and methods. Evaluations performed on TREC-8 demonstrated the effectiveness of the proposed method with respect to the baseline.
    Type
    a
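
  As a rough illustration of the expansion idea in entry 11, the sketch below derives weighted word pairs from a few hypothetical relevance-feedback snippets and appends the heaviest pairs to the query; plain co-occurrence counts replace the paper's Probabilistic Topic Model:

     from collections import Counter
     from itertools import combinations

     feedback_docs = [  # hypothetical relevance-feedback snippets
         "relevance feedback improves retrieval accuracy",
         "weighted word pairs guide query expansion",
         "topic models select weighted word pairs",
     ]

     # Weight a pair by how many feedback documents contain both words.
     pair_weight = Counter()
     for doc in feedback_docs:
         for pair in combinations(sorted(set(doc.split())), 2):
             pair_weight[pair] += 1

     def expand(query, k=3):
         """Append the terms of the k heaviest word pairs to the query."""
         terms = query.split()
         for (a, b), _ in pair_weight.most_common(k):
             terms += [t for t in (a, b) if t not in terms]
         return " ".join(terms)

     print(expand("query expansion"))
     # -> "query expansion pairs weighted word"
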
  12. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.03
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
    Type
    a
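
  In the spirit of the extractor comparison in entry 12, here is a deliberately naive candidate-term extractor that keeps the most frequent content-word bigrams; it is a toy baseline, not one of the four systems the paper evaluates:

     import re
     from collections import Counter

     STOP = {"a", "an", "the", "of", "and", "in", "is", "are",
             "between", "every", "has"}

     def candidate_terms(text, k=3):
         """Return the k most frequent bigrams of content words."""
         tokens = [t for t in re.findall(r"[a-z]+", text.lower())
                   if t not in STOP]
         bigrams = Counter(zip(tokens, tokens[1:]))
         return [" ".join(bg) for bg, _ in bigrams.most_common(k)]

     sample = ("A natural transformation is a map between functors. "
               "Every natural transformation has component morphisms.")
     print(candidate_terms(sample))  # 'natural transformation' ranks first
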
  13. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.03
    Abstract
     Chronicles the early history of applying electronic computers to the task of translating natural languages, from the first suggestions by Warren Weaver in March 1947 to the first demonstration of a working, if limited, program in January 1954.
    Date
    31. 7.1996 9:22:19
    Type
    a
  14. Al-Khatib, K.; Ghosal, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.03
    Abstract
     Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, which could benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain and suggest some possible solutions and future directions.
    Type
    a
  15. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    Source
    c't. 2000, H.22, S.220-229
    Type
    a
  16. Budin, G.: Zum Entwicklungsstand der Terminologiewissenschaft (2019) 0.02
    Footnote
     Cf.: https://www.springer.com/de/book/9783662589489.
    Type
    a
  17. Stieler, W.: Anzeichen von Bewusstsein bei ChatGPT und Co.? (2023) 0.02
    Abstract
     An interdisciplinary research team has drawn up a list of properties that point to consciousness and checked current AI systems against it. The team has published a paper [https://arxiv.org/abs/2308.08708] containing a list of 14 "indicators" of consciousness, drawn from six current theories of consciousness. According to the paper, current AI models such as GPT-3, Palm-E or AdA from Deepmind exhibit some of these indicators. "There is much to suggest that most or all of the conditions for consciousness proposed by current theories can be met with existing AI techniques," the authors write. The team also included deep learning pioneer Yoshua Bengio of the Université de Montréal.
    Type
    a
  18. Jaaranen, K.; Lehtola, A.; Tenni, J.; Bounsaythip, C.: Webtran tools for in-company language support (2000) 0.02
    Abstract
     Webtran tools for authoring and translating domain-specific texts can make multilingual text production in a company more efficient and less expensive. The tools have been in production use since spring 2000 for checking and translating product article texts of a specific domain, namely an in-company language in the sales catalogues of a mail-order company. Webtran tools have been developed by VTT Information Technology. Experience from use has shown that an automatic translation process is faster than phrase-lexicon-assisted manual translation, if an in-company language model is created to control and support the language used within the company.
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
    Type
    a
  19. Galvez, C.; Moya-Anegón, F. de; Solana, V.H.: Term conflation methods in information retrieval : non-linguistic and linguistic approaches (2005) 0.02
    Abstract
     Purpose - To propose a categorization of the different conflation procedures into the two basic approaches, non-linguistic and linguistic techniques, and to justify the application of normalization methods within the framework of linguistic techniques. Design/methodology/approach - Presents a range of term conflation methods that can be used in information retrieval. The uniterm and multiterm variants can be considered equivalent units for the purposes of automatic indexing. Stemming algorithms, segmentation rules, association measures and clustering techniques are well-evaluated non-linguistic methods, and experiments with these techniques show a wide variety of results. Alternatively, lemmatisation and the use of syntactic pattern-matching, through equivalence relations represented in finite-state transducers (FSTs), are emerging methods for the recognition and standardization of terms. Findings - The survey attempts to point out the positive and negative effects of the linguistic approach and its potential as a term conflation method. Originality/value - Outlines the importance of FSTs for the normalization of term variants.
    Type
    a
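
  The two conflation families categorized in entry 19 can be contrasted in a few lines: a crude suffix stripper standing in for non-linguistic stemming, and a small lemma dictionary standing in for FST-based lemmatization. Both are illustrative toys, not the article's methods:

     def crude_stem(word):
         # Strip a few suffixes by pattern alone; no linguistic analysis.
         for suf in ("ation", "ings", "ing", "es", "s"):
             if word.endswith(suf) and len(word) > len(suf) + 2:
                 return word[: -len(suf)]
         return word

     # Hypothetical lemma dictionary; a real system would encode such
     # mappings compactly in a finite-state transducer.
     LEMMAS = {"indexes": "index", "indexing": "index", "indices": "index"}

     def lemmatize(word):
         return LEMMAS.get(word, word)

     for w in ("indexes", "indexing", "indices"):
         print(w, "->", crude_stem(w), "|", lemmatize(w))
     # The stemmer conflates "indexes"/"indexing" but mangles the
     # irregular "indices"; the lexical mapping handles all three.
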
  20. Galvez, C.; Moya-Anegón, F. de: An evaluation of conflation accuracy using finite-state transducers (2006) 0.02
    Abstract
    Purpose - To evaluate the accuracy of conflation methods based on finite-state transducers (FSTs). Design/methodology/approach - Incorrectly lemmatized and stemmed forms may lead to the retrieval of inappropriate documents. Experimental studies to date have focused on retrieval performance, but very few on conflation performance. The process of normalization we used involved a linguistic toolbox that allowed us to construct, through graphic interfaces, electronic dictionaries represented internally by FSTs. The lexical resources developed were applied to a Spanish test corpus for merging term variants in canonical lemmatized forms. Conflation performance was evaluated in terms of an adaptation of recall and precision measures, based on accuracy and coverage, not actual retrieval. The results were compared with those obtained using a Spanish version of the Porter algorithm. Findings - The conclusion is that the main strength of lemmatization is its accuracy, whereas its main limitation is the underanalysis of variant forms. Originality/value - The report outlines the potential of transducers in their application to normalization processes.
    Type
    a
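
  Finally, the style of evaluation described in entry 20 can be sketched as follows; the coverage and accuracy definitions are plausible adaptations for illustration, not necessarily the article's exact measures, and the gold data is invented:

     # Invented gold standard mapping Spanish variants to their lemmas.
     GOLD = {"niños": "niño", "niñas": "niño",
             "canta": "cantar", "cantamos": "cantar"}

     def evaluate(conflate):
         changed = {v: conflate(v) for v in GOLD if conflate(v) != v}
         correct = sum(1 for v, out in changed.items() if out == GOLD[v])
         coverage = len(changed) / len(GOLD)   # variants analysed at all
         accuracy = correct / len(changed) if changed else 0.0
         return coverage, accuracy

     # Toy conflator: strips a final -s; it covers the plural variants
     # but fails on verb inflection, the trade-off the article measures.
     print(evaluate(lambda w: w[:-1] if w.endswith("s") else w))
     # -> (0.75, 0.333...)
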

Types

  • a 629
  • el 76
  • m 43
  • s 23
  • x 9
  • p 7
  • b 1
  • d 1
  • pat 1
  • r 1
