Search (141 results, page 1 of 8)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.05
    0.054134153 = product of:
      0.081201226 = sum of:
        0.06936665 = product of:
          0.20809995 = sum of:
            0.20809995 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20809995 = score(doc=562,freq=2.0), product of:
                0.37027273 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04367448 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.011834579 = product of:
          0.035503734 = sum of:
            0.035503734 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.035503734 = score(doc=562,freq=2.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
      0.6666667 = coord(2/3)
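    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, its arithmetic can be reproduced in a few lines; the helper names below are illustrative, not Lucene's API, and queryNorm and fieldNorm are copied from the explain output rather than recomputed, since they depend on the full query and index:

    ```python
    import math

    # Reproducing the Lucene ClassicSimilarity arithmetic from the explain tree.
    # Helper names are ours; queryNorm and fieldNorm are taken verbatim from
    # the explain output above.

    def tf(freq):
        """Term-frequency factor: square root of the raw frequency."""
        return math.sqrt(freq)

    def idf(doc_freq, max_docs):
        """Inverse document frequency, Lucene's classic formula."""
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    query_norm = 0.04367448   # from the explain output
    field_norm = 0.046875     # lengthNorm * boost, stored per field

    idf_3a = idf(24, 44218)                        # ~8.478011
    query_weight = idf_3a * query_norm             # ~0.37027273
    field_weight = tf(2.0) * idf_3a * field_norm   # ~0.56201804
    score = query_weight * field_weight            # ~0.20809995
    ```

    The top-level numbers then follow from the coord factors: the 0.20809995 leaf is multiplied by coord(1/3), summed with the other clause, and scaled by coord(2/3) on the way up to 0.054134153.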
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Sheremet'eva, S.O.: Teoreticheskie i metodologicheskie problemy inzhenernoi lingvistiki (1998) 0.03
    0.03181152 = product of:
      0.09543456 = sum of:
        0.09543456 = product of:
          0.14315183 = sum of:
            0.08344181 = weight(_text_:theory in 6316) [ClassicSimilarity], result of:
              0.08344181 = score(doc=6316,freq=2.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.4594418 = fieldWeight in 6316, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6316)
            0.059710022 = weight(_text_:29 in 6316) [ClassicSimilarity], result of:
              0.059710022 = score(doc=6316,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.38865322 = fieldWeight in 6316, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6316)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    6. 3.1999 13:56:29
    Footnote
    Translated title: Theory and methodology issues of linguistic engineering
  3. Way, E.C.: Knowledge representation and metaphor (oder: meaning) (1994) 0.03
    0.025353726 = product of:
      0.076061174 = sum of:
        0.076061174 = product of:
          0.114091754 = sum of:
            0.06675345 = weight(_text_:theory in 771) [ClassicSimilarity], result of:
              0.06675345 = score(doc=771,freq=2.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.36755344 = fieldWeight in 771, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=771)
            0.04733831 = weight(_text_:22 in 771) [ClassicSimilarity], result of:
              0.04733831 = score(doc=771,freq=2.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.30952093 = fieldWeight in 771, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=771)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Content
    Contains the following nine chapters: The literal and the metaphoric; Views of metaphor; Knowledge representation; Representation schemes and conceptual graphs; The dynamic type hierarchy theory of metaphor; Computational approaches to metaphor; The nature and structure of semantic hierarchies; Language games, open texture and family resemblance; Programming the dynamic type hierarchy; Subject index
    Footnote
    First published by Kluwer in 1991 // Reviewed in: Knowledge organization 22(1995) no.1, S.48-49 (O. Sechser)
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.02
    0.023122218 = product of:
      0.06936665 = sum of:
        0.06936665 = product of:
          0.20809995 = sum of:
            0.20809995 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.20809995 = score(doc=862,freq=2.0), product of:
                0.37027273 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04367448 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.02
    0.02263315 = product of:
      0.06789945 = sum of:
        0.06789945 = product of:
          0.10184917 = sum of:
            0.07226273 = weight(_text_:theory in 4900) [ClassicSimilarity], result of:
              0.07226273 = score(doc=4900,freq=6.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.39788827 = fieldWeight in 4900, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4900)
            0.029586446 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
              0.029586446 = score(doc=4900,freq=2.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.19345059 = fieldWeight in 4900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4900)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Content
    Contents: Manfred Klenner / Henriette Visser: Introduction - Khurshid Ahmad: Writing Linguistics: When I use a word it means what I choose it to mean - Jürgen Handke: 2000 and Beyond: The Potential of New Technologies in Linguistics - Jurij Apresjan / Igor Boguslavsky / Leonid Iomdin / Leonid Tsinman: Lexical Functions in NU: Possible Uses - Hubert Lehmann: Practical Machine Translation and Linguistic Theory - Karin Haenelt: A Contextbased Approach towards Content Processing of Electronic Documents - Petr Sgall / Eva Hajicová: Are Linguistic Frameworks Comparable? - Wolfgang Menzel: Theory and Applications in Computational Linguistics - Is there Common Ground? - Robert Porzel / Michael Strube: Towards Context-adaptive Natural Language Processing Systems - Nicoletta Calzolari: Language Resources in a Multilingual Setting: The European Perspective - Piek Vossen: Computational Linguistics for Theory and Practice.
  6. Saeed, K.; Dardzinska, A.: Natural language processing : word recognition without segmentation (2001) 0.02
    0.022268062 = product of:
      0.066804186 = sum of:
        0.066804186 = product of:
          0.10020628 = sum of:
            0.058409266 = weight(_text_:theory in 7707) [ClassicSimilarity], result of:
              0.058409266 = score(doc=7707,freq=2.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.32160926 = fieldWeight in 7707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7707)
            0.041797012 = weight(_text_:29 in 7707) [ClassicSimilarity], result of:
              0.041797012 = score(doc=7707,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.27205724 = fieldWeight in 7707, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7707)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In an earlier article about the methods of recognition of machine and hand-written cursive letters, we presented a model showing the possibility of processing, classifying, and hence recognizing such scripts as images. The practical results we obtained encouraged us to extend the theory to an algorithm for word recognition. In this article, we introduce our ideas, describe our achievements, and present our results of testing words for recognition without segmentation. This would lead to the possibility of applying the methods used in this work, together with other previously developed algorithms, to process whole sentences and, hence, written and spoken texts with the goal of automatic recognition.
    Date
    16.12.2001 18:29:38
  7. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.02
    0.022268062 = product of:
      0.066804186 = sum of:
        0.066804186 = product of:
          0.10020628 = sum of:
            0.058409266 = weight(_text_:theory in 3510) [ClassicSimilarity], result of:
              0.058409266 = score(doc=3510,freq=2.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.32160926 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
            0.041797012 = weight(_text_:29 in 3510) [ClassicSimilarity], result of:
              0.041797012 = score(doc=3510,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.27205724 = fieldWeight in 3510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3510)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  8. Fóris, A.: Network theory and terminology (2013) 0.02
    0.019686382 = product of:
      0.059059143 = sum of:
        0.059059143 = product of:
          0.088588715 = sum of:
            0.059002265 = weight(_text_:theory in 1365) [ClassicSimilarity], result of:
              0.059002265 = score(doc=1365,freq=4.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.3248744 = fieldWeight in 1365, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
            0.029586446 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.029586446 = score(doc=1365,freq=2.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper aims to present the relations of network theory and terminology. The model of scale-free networks, which was developed recently and has since been widely applied, can be effectively used in terminology research as well. Operation based on the principle of networks is a universal characteristic of complex systems. Networks are governed by general laws. The model of scale-free networks can be viewed as a statistical-probability model, and it can be described with mathematical tools. Its main feature is that "everything is connected to everything else," that is, every node is reachable (in a few steps) starting from any other node; this phenomenon is called "the small world phenomenon." The existence of a linguistic network and the general laws of the operation of networks enable us to place issues of language use in the complex system of relations that reveal the deeper connections between phenomena with the help of networks embedded in each other. The realization of the metaphor that language also has a network structure is the basis of the classification methods of the terminological system, and likewise of the ways of creating terminology databases, which serve the purpose of providing easy and versatile accessibility to specialised knowledge.
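    The preferential-attachment mechanism behind the scale-free model described above can be sketched in a few lines. This is a simplified Barabási-Albert-style generator with invented parameters, not anything from the paper itself:

    ```python
    import random

    # Minimal sketch of a scale-free network generator in the spirit of the
    # Barabási-Albert model: each new node attaches to existing nodes with
    # probability proportional to their degree. Parameters are illustrative.

    def scale_free_graph(n, m, seed=42):
        """Grow a graph to n nodes; each new node links to up to m
        existing nodes chosen by preferential attachment."""
        rng = random.Random(seed)
        edges = set()
        targets = list(range(m))   # start from m seed nodes
        repeated = []              # node list weighted by degree
        for new in range(m, n):
            for t in set(targets):
                edges.add((min(new, t), max(new, t)))
                repeated.extend([new, t])
            # sample the next node's targets by degree weight
            targets = [rng.choice(repeated) for _ in range(m)]
        return edges

    edges = scale_free_graph(200, 2)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # the degree distribution comes out heavy-tailed: a few hubs, many leaves
    ```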
    Date
    2. 9.2014 21:22:48
  9. Melby, A.: Some notes on 'The proper place of men and machines in language translation' (1997) 0.02
    0.018492898 = product of:
      0.055478692 = sum of:
        0.055478692 = product of:
          0.08321804 = sum of:
            0.041797012 = weight(_text_:29 in 330) [ClassicSimilarity], result of:
              0.041797012 = score(doc=330,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.27205724 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
            0.041421022 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
              0.041421022 = score(doc=330,freq=2.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.2708308 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 12(1997) nos.1/2, S.29-34
  10. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    0.015932571 = product of:
      0.04779771 = sum of:
        0.04779771 = product of:
          0.071696565 = sum of:
            0.029855011 = weight(_text_:29 in 2541) [ClassicSimilarity], result of:
              0.029855011 = score(doc=2541,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.19432661 = fieldWeight in 2541, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
            0.04184155 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.04184155 = score(doc=2541,freq=4.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  11. Kuhlen, R.: Morphologische Relationen durch Reduktionsalgorithmen (1974) 0.01
    0.013135536 = product of:
      0.039406605 = sum of:
        0.039406605 = product of:
          0.118219815 = sum of:
            0.118219815 = weight(_text_:29 in 4251) [ClassicSimilarity], result of:
              0.118219815 = score(doc=4251,freq=4.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.7694941 = fieldWeight in 4251, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4251)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    29. 1.2011 14:56:29
  12. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.01
    0.012846707 = product of:
      0.03854012 = sum of:
        0.03854012 = product of:
          0.11562036 = sum of:
            0.11562036 = weight(_text_:theory in 2822) [ClassicSimilarity], result of:
              0.11562036 = score(doc=2822,freq=6.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.63662124 = fieldWeight in 2822, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2822)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published in book form in English. In her work, Zhang developed a method based on rough set theory that establishes relations between subject headings from different vocabularies. She was a member of the IZ staff from 2003 to 2005 and has been an Associate Professor at the Nanjing University of Science and Technology since October 2005.
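    For readers unfamiliar with the machinery, the lower and upper approximations at the heart of rough set theory can be sketched on a toy document-descriptor table (the data below is invented for illustration, not Zhang's):

    ```python
    # Rough-set lower/upper approximation on an invented toy table.
    # Documents are indiscernible when they share the same descriptor.

    def partition(universe, key):
        """Equivalence classes of the indiscernibility relation induced by key."""
        classes = {}
        for x in universe:
            classes.setdefault(key(x), set()).add(x)
        return list(classes.values())

    def approximations(universe, key, target):
        lower, upper = set(), set()
        for c in partition(universe, key):
            if c <= target:   # class certainly inside the target set
                lower |= c
            if c & target:    # class possibly inside the target set
                upper |= c
        return lower, upper

    descriptor = {"d1": "linguistics", "d2": "linguistics",
                  "d3": "ai", "d4": "ai", "d5": "ir"}
    relevant = {"d1", "d3", "d4"}   # documents judged relevant to a query

    lower, upper = approximations(descriptor, descriptor.get, relevant)
    # lower == {"d3", "d4"}; the boundary region upper - lower == {"d1", "d2"}
    # captures documents that are only possibly, not certainly, relevant
    ```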
  13. Barthel, J.; Ciesielski, R.: Regeln zu ChatGPT an Unis oft unklar : KI in der Bildung (2023) 0.01
    0.0114912 = product of:
      0.034473598 = sum of:
        0.034473598 = product of:
          0.103420794 = sum of:
            0.103420794 = weight(_text_:29 in 925) [ClassicSimilarity], result of:
              0.103420794 = score(doc=925,freq=6.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.6731671 = fieldWeight in 925, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=925)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    29. 3.2023 13:23:26
    29. 3.2023 13:29:19
  14. Wettler, M.; Rapp, R.; Ferber, R.: Freie Assoziationen und Kontiguitäten von Wörtern in Texten (1993) 0.01
    0.010615116 = product of:
      0.031845346 = sum of:
        0.031845346 = product of:
          0.09553603 = sum of:
            0.09553603 = weight(_text_:29 in 2140) [ClassicSimilarity], result of:
              0.09553603 = score(doc=2140,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.6218451 = fieldWeight in 2140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=2140)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    4.11.1998 14:30:29
  15. Warner, A.J.: Natural language processing (1987) 0.01
    0.010519626 = product of:
      0.031558875 = sum of:
        0.031558875 = product of:
          0.09467662 = sum of:
            0.09467662 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09467662 = score(doc=337,freq=2.0), product of:
                0.15294059 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04367448 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  16. Xianghao, G.; Yixin, Z.; Li, Y.: ¬A new method of news text understanding and abstracting based on speech acts theory (1998) 0.01
    0.010489292 = product of:
      0.031467877 = sum of:
        0.031467877 = product of:
          0.094403625 = sum of:
            0.094403625 = weight(_text_:theory in 3532) [ClassicSimilarity], result of:
              0.094403625 = score(doc=3532,freq=4.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.51979905 = fieldWeight in 3532, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3532)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Presents a method for the automated analysis and comprehension of foreign affairs news produced by a Chinese news agency. Notes that the development of the method was preceded by a study of the structuring rules of the news. Describes how an abstract of the news story is produced automatically from the analysis. Stresses the main aim of the work, which is to use speech act theory to analyse and classify sentences.
  17. Heinrichs, J.: Language theory for the computer : monodimensional semantics or multidimensional semiotics? (1996) 0.01
    0.010365643 = product of:
      0.031096928 = sum of:
        0.031096928 = product of:
          0.09329078 = sum of:
            0.09329078 = weight(_text_:theory in 5364) [ClassicSimilarity], result of:
              0.09329078 = score(doc=5364,freq=10.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.5136716 = fieldWeight in 5364, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5364)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Computer linguistics continues to be in need of an integrative language-theory model. Maria Theresia Rolland proposes such a model in her book 'Sprachverarbeitung durch Logotechnik' (1994). Relying upon the language theory of Leo Weisgerber, she pursues a purely 'content-oriented' approach, by which she understands an approach in terms of the semantics of words. Starting from the 'implications' of word-contents, she attempts to construct a complete grammar of the German language. The reviewer begins his comments with an immanent critique, calling attention to a number of serious contradictions in Rolland's concept, among them her refusal to take syntax into account despite its undeniable real presence. In the second part of his comments, the reviewer then takes up his own semiotic language theory published in 1981, showing that semantics is but one of four semiotic dimensions of language, the other dimensions being the sigmatic, the pragmatic and the syntactic. Without taking all four dimensions into account, no theory can offer an adequate integrative language model. Indeed, without all four dimensions, one cannot even develop an adequate grammar of German sentence construction. The fourfold semiotic model also discloses the universally valid structures of language as the intersubjective expression of human self-awareness. Only on the basis of these universal structures, it is argued, is it possible to identify the specific structures of a native language, and that on all four levels. This position has important consequences for the problems of computer translation and the comparative study and use of languages.
  18. Hahn, U.; Reimer, U.: Informationslinguistische Konzepte der Volltextverarbeitung in TOPIC (1983) 0.01
    0.009288225 = product of:
      0.027864676 = sum of:
        0.027864676 = product of:
          0.083594024 = sum of:
            0.083594024 = weight(_text_:29 in 450) [ClassicSimilarity], result of:
              0.083594024 = score(doc=450,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.5441145 = fieldWeight in 450, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=450)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    Deutscher Dokumentartag 1982, Lübeck-Travemünde, 29.-30.9.1982: Fachinformation im Zeitalter der Informationsindustrie. Bearb.: H. Strohl-Goebel
  19. Proszeky, G.: Language technology tools in the translator's practice (1999) 0.01
    0.009288225 = product of:
      0.027864676 = sum of:
        0.027864676 = product of:
          0.083594024 = sum of:
            0.083594024 = weight(_text_:29 in 6873) [ClassicSimilarity], result of:
              0.083594024 = score(doc=6873,freq=2.0), product of:
                0.15363316 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04367448 = queryNorm
                0.5441145 = fieldWeight in 6873, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6873)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    30. 3.2002 18:29:40
  20. Warner, J.: Analogies between linguistics and information theory (2007) 0.01
    0.0092713125 = product of:
      0.027813938 = sum of:
        0.027813938 = product of:
          0.08344181 = sum of:
            0.08344181 = weight(_text_:theory in 138) [ClassicSimilarity], result of:
              0.08344181 = score(doc=138,freq=8.0), product of:
                0.18161562 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.04367448 = queryNorm
                0.4594418 = fieldWeight in 138, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=138)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    An analogy is established between the syntagm and paradigm from Saussurean linguistics and the message and messages for selection from the information theory initiated by Claude Shannon. The analogy is pursued both as an end in itself and for its analytic value in understanding patterns of retrieval from full-text systems. The multivalency of individual words when isolated from their syntagm is contrasted with the relative stability of meaning of multiword sequences, when searching ordinary written discourse. The syntagm is understood as the linear sequence of oral and written language. Saussure's understanding of the word, as a unit that compels recognition by the mind, is endorsed, although not regarded as final. The lesser multivalency of multiword sequences is understood as the greater determination of signification by the extended syntagm. The paradigm is primarily understood as the network of associations a word acquires when considered apart from the syntagm. The restriction of information theory to expression or signals, and its focus on the combinatorial aspects of the message, is sustained. The message in the model of communication in information theory can include sequences of written language. Shannon's understanding of the written word, as a cohesive group of letters, with strong internal statistical influences, is added to the Saussurean conception. Sequences of more than one word are regarded as weakly correlated concatenations of cohesive units.
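    Shannon's "strong internal statistical influences" among the letters of written words, which the abstract draws on, can be illustrated with a toy estimate (the corpus snippet and variable names below are ours, not the article's): the entropy of a letter conditioned on its predecessor comes out below the unconditioned letter entropy.

    ```python
    import math
    from collections import Counter

    # Toy illustration: letters in written language are statistically
    # constrained, so knowing the previous letter reduces the entropy of
    # the next one. The corpus snippet is invented for this sketch.

    text = ("the syntagm is the linear sequence of oral and written language "
            "the paradigm is the network of associations a word acquires")

    def entropy(counts):
        """Shannon entropy in bits of an empirical distribution."""
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    H1 = entropy(Counter(text))                  # letter entropy H(X)
    H2 = entropy(Counter(zip(text, text[1:])))   # bigram entropy H(X1, X2)
    H_cond = H2 - H1                             # approximately H(X2 | X1)
    # H_cond < H1: the syntagmatic context lowers per-letter uncertainty
    ```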

Languages

  • e 104
  • d 32
  • ru 2
  • chi 1

Types

  • a 109
  • m 19
  • el 13
  • s 10
  • x 3
  • p 2
  • d 1

Classifications