Search (276 results, page 1 of 14)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07571052 = sum of:
      0.061679702 = product of:
        0.24671881 = sum of:
          0.24671881 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24671881 = score(doc=562,freq=2.0), product of:
              0.43898734 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05177952 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.01403082 = product of:
        0.042092457 = sum of:
          0.042092457 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042092457 = score(doc=562,freq=2.0), product of:
              0.18132305 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05177952 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Pirkola, A.; Hedlund, T.; Keskustalo, H.; Järvelin, K.: Dictionary-based cross-language information retrieval : problems, methods, and research findings (2001) 0.05
    0.050500378 = product of:
      0.101000756 = sum of:
        0.101000756 = product of:
          0.15150113 = sum of:
            0.10206422 = weight(_text_:k in 3908) [ClassicSimilarity], result of:
              0.10206422 = score(doc=3908,freq=2.0), product of:
                0.1848414 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.05177952 = queryNorm
                0.5521719 = fieldWeight in 3908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3908)
            0.04943691 = weight(_text_:h in 3908) [ClassicSimilarity], result of:
              0.04943691 = score(doc=3908,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.38429362 = fieldWeight in 3908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3908)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  3. Somers, H.: Example-based machine translation : Review article (1999) 0.05
    0.04921755 = product of:
      0.0984351 = sum of:
        0.0984351 = product of:
          0.14765264 = sum of:
            0.04943691 = weight(_text_:h in 6672) [ClassicSimilarity], result of:
              0.04943691 = score(doc=6672,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.38429362 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
            0.09821574 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09821574 = score(doc=6672,freq=2.0), product of:
                0.18132305 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05177952 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  4. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.05
    0.04921755 = product of:
      0.0984351 = sum of:
        0.0984351 = product of:
          0.14765264 = sum of:
            0.04943691 = weight(_text_:h in 3117) [ClassicSimilarity], result of:
              0.04943691 = score(doc=3117,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.38429362 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
            0.09821574 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09821574 = score(doc=3117,freq=2.0), product of:
                0.18132305 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05177952 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  5. Konrad, K.; Maier, H.; Pinkal, M.; Milward, D.: CLEARS: ein Werkzeug für Ausbildung und Forschung in der Computerlinguistik (1996) 0.04
    0.043286037 = product of:
      0.08657207 = sum of:
        0.08657207 = product of:
          0.1298581 = sum of:
            0.087483615 = weight(_text_:k in 7298) [ClassicSimilarity], result of:
              0.087483615 = score(doc=7298,freq=2.0), product of:
                0.1848414 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.05177952 = queryNorm
                0.47329018 = fieldWeight in 7298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7298)
            0.04237449 = weight(_text_:h in 7298) [ClassicSimilarity], result of:
              0.04237449 = score(doc=7298,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.32939452 = fieldWeight in 7298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7298)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  6. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.04
    0.04218647 = product of:
      0.08437294 = sum of:
        0.08437294 = product of:
          0.1265594 = sum of:
            0.04237449 = weight(_text_:h in 5429) [ClassicSimilarity], result of:
              0.04237449 = score(doc=5429,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.32939452 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
            0.084184915 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.084184915 = score(doc=5429,freq=2.0), product of:
                0.18132305 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05177952 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  7. Maschinelle Spracherkennung (1994) 0.04
    0.040947277 = product of:
      0.081894554 = sum of:
        0.081894554 = product of:
          0.12284183 = sum of:
            0.07290301 = weight(_text_:k in 7147) [ClassicSimilarity], result of:
              0.07290301 = score(doc=7147,freq=2.0), product of:
                0.1848414 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.05177952 = queryNorm
                0.39440846 = fieldWeight in 7147, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7147)
            0.04993882 = weight(_text_:h in 7147) [ClassicSimilarity], result of:
              0.04993882 = score(doc=7147,freq=4.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.3881952 = fieldWeight in 7147, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7147)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Consists of the following individual contributions: FELLBAUM, K.: Prinzipien, Stand der Technik, sprecherabhängige Einzelworterkennung; SPIES, M.: Grundzüge der Spracherkennung in einem Diktiersystem; STEINBIß, V.: Pausenlos diktieren: kontinuierliche Spracherkennung in der Radiologie; MANGOLD, H.: Das Telefon als intelligenter Gesprächspartner; WAHLSTER, W.: Verbmobil: Übersetzungshilfe für Verhandlungsdialoge
    Source
    Spektrum der Wissenschaft. 1994, H.3, S.86-104
  8. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.04
    0.039066486 = sum of:
      0.010403389 = product of:
        0.041613556 = sum of:
          0.041613556 = weight(_text_:authors in 5089) [ClassicSimilarity], result of:
            0.041613556 = score(doc=5089,freq=2.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.17628889 = fieldWeight in 5089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5089)
        0.25 = coord(1/4)
      0.028663095 = product of:
        0.04299464 = sum of:
          0.025516056 = weight(_text_:k in 5089) [ClassicSimilarity], result of:
            0.025516056 = score(doc=5089,freq=2.0), product of:
              0.1848414 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.05177952 = queryNorm
              0.13804297 = fieldWeight in 5089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5089)
          0.017478587 = weight(_text_:h in 5089) [ClassicSimilarity], result of:
            0.017478587 = score(doc=5089,freq=4.0), product of:
              0.12864359 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05177952 = queryNorm
              0.13586831 = fieldWeight in 5089, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5089)
        0.6666667 = coord(2/3)
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  9. Wilhelm, K.: ¬Die Macht der Grammatik (2000) 0.04
    0.036071695 = product of:
      0.07214339 = sum of:
        0.07214339 = product of:
          0.108215086 = sum of:
            0.07290301 = weight(_text_:k in 5510) [ClassicSimilarity], result of:
              0.07290301 = score(doc=5510,freq=2.0), product of:
                0.1848414 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.05177952 = queryNorm
                0.39440846 = fieldWeight in 5510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5510)
            0.03531208 = weight(_text_:h in 5510) [ClassicSimilarity], result of:
              0.03531208 = score(doc=5510,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.27449545 = fieldWeight in 5510, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5510)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Max Planck Forschung. 2000, H.1, S.26-32
  10. Hartnett, K.: Sind Sprachmodelle bald die besseren Mathematiker? (2023) 0.04
    0.036071695 = product of:
      0.07214339 = sum of:
        0.07214339 = product of:
          0.108215086 = sum of:
            0.07290301 = weight(_text_:k in 988) [ClassicSimilarity], result of:
              0.07290301 = score(doc=988,freq=2.0), product of:
                0.1848414 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.05177952 = queryNorm
                0.39440846 = fieldWeight in 988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.078125 = fieldNorm(doc=988)
            0.03531208 = weight(_text_:h in 988) [ClassicSimilarity], result of:
              0.03531208 = score(doc=988,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.27449545 = fieldWeight in 988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=988)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Spektrum der Wissenschaft. 2023, H.7, S.28-31
  11. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    0.035709426 = sum of:
      0.02752478 = product of:
        0.11009912 = sum of:
          0.11009912 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.11009912 = score(doc=3807,freq=14.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.008184645 = product of:
        0.024553934 = sum of:
          0.024553934 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.024553934 = score(doc=3807,freq=2.0), product of:
              0.18132305 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05177952 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.33333334 = coord(1/3)
    
    Abstract
    Purpose - Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria which included that definitions should be unique instances. From 2006 onwards, these authors could not identify new unique instances of definitions with repetitive usage of such definition instances. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues. Design/methodology/approach - The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods. Findings - By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Research limitations/implications - In total, 42 definitions were identified spanning a period of 11 years. This represented the first use of KM through the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions that were considered to repeat were therefore subsequently excluded as not being unique instances. All definitions listed are by no means complete and exhaustive. The definitions are viewed outside the scope and context in which they were originally formulated and then used to review the key concepts in the definitions themselves. Social implications - When the authors refer to the aforementioned discussion of KM content as well as the presentation of the method followed in this paper, the authors may have a few implications for future research in KM. First, the research validates ideas presented by the OECD in 2005 pertaining to KM. It also validates that through the evolution of KM, the authors ended with a description of KM that may be seen as a standardised description. If the authors as academics and practitioners, for example, refer to KM as the same construct and/or idea, it has the potential, speculatively, to distinguish between what KM may or may not be. Originality/value - By simplifying the term used to define KM, by focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This would hopefully assist in reigniting discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  12. Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia : Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000 Fachhochschule Köln (2000) 0.04
    0.035492487 = product of:
      0.070984975 = sum of:
        0.070984975 = product of:
          0.106477454 = sum of:
            0.08150805 = weight(_text_:k in 5527) [ClassicSimilarity], result of:
              0.08150805 = score(doc=5527,freq=10.0), product of:
                0.1848414 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.05177952 = queryNorm
                0.44096208 = fieldWeight in 5527, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5527)
            0.02496941 = weight(_text_:h in 5527) [ClassicSimilarity], result of:
              0.02496941 = score(doc=5527,freq=4.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.1940976 = fieldWeight in 5527, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5527)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: WRIGHT, S.E.: Leveraging terminology resources across application boundaries: accessing resources in future integrated environments; PALME, K.: E-Commerce: Verhindert Sprache Business-to-business?; RÜEGGER, R.: Die Qualität der virtuellen Information als Wettbewerbsvorteil: Information im Internet ist Sprache - noch; SCHIRMER, K. u. J. HALLER: Zugang zu mehrsprachigen Nachrichten im Internet; WEISS, A. u. W. WIEDEN: Die Herstellung mehrsprachiger Informations- und Wissensressourcen in Unternehmen; FULFORD, H.: Monolingual or multilingual web sites? An exploratory study of UK SMEs; SCHMIDTKE-NIKELLA, M.: Effiziente Hypermediaentwicklung: Die Autorenentlastung durch eine Engine; SCHMIDT, R.: Maschinelle Text-Ton-Synchronisation in Wissenschaft und Wirtschaft; HELBIG, H. u.a.: Natürlichsprachlicher Zugang zu Informationsanbietern im Internet und zu lokalen Datenbanken; SIENEL, J. u.a.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts; ERBACH, G.: Sprachdialogsysteme für Telefondienste: Stand der Technik und zukünftige Entwicklungen; SUSEN, A.: Spracherkennung: Aktuelle Einsatzmöglichkeiten im Bereich der Telekommunikation; BENZMÜLLER, R.: Logox WebSpeech: die neue Technologie für sprechende Internetseiten; JAARANEN, K. u.a.: Webtran tools for in-company language support; SCHMITZ, K.-D.: Projektforschung und Infrastrukturen im Bereich der Terminologie: Wie kann die Wirtschaft davon profitieren?; SCHRÖTER, F. u. U. MEYER: Entwicklung sprachlicher Handlungskompetenz in Englisch mit Hilfe eines Multimedia-Sprachlernsystems; KLEIN, A.: Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet; HAUER, M.: Knowledge Management braucht Terminologie Management; HEYER, G. u.a.: Texttechnologische Anwendungen am Beispiel Text Mining
    Editor
    Schmitz, K.-D.
  13. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.04
    0.035155393 = product of:
      0.07031079 = sum of:
        0.07031079 = product of:
          0.10546618 = sum of:
            0.03531208 = weight(_text_:h in 5428) [ClassicSimilarity], result of:
              0.03531208 = score(doc=5428,freq=2.0), product of:
                0.12864359 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05177952 = queryNorm
                0.27449545 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
            0.0701541 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.0701541 = score(doc=5428,freq=2.0), product of:
                0.18132305 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05177952 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  14. Sparck Jones, K.; Galliers, J.R.: Evaluating natural language processing systems : an analysis and review (1996) 0.03
    0.032414984 = sum of:
      0.017834382 = product of:
        0.07133753 = sum of:
          0.07133753 = weight(_text_:authors in 2934) [ClassicSimilarity], result of:
            0.07133753 = score(doc=2934,freq=2.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.30220953 = fieldWeight in 2934, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2934)
        0.25 = coord(1/4)
      0.014580603 = product of:
        0.043741807 = sum of:
          0.043741807 = weight(_text_:k in 2934) [ClassicSimilarity], result of:
            0.043741807 = score(doc=2934,freq=2.0), product of:
              0.1848414 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.05177952 = queryNorm
              0.23664509 = fieldWeight in 2934, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=2934)
        0.33333334 = coord(1/3)
    
    Abstract
    This comprehensive state-of-the-art book is the first devoted to the important and timely issue of evaluating NLP systems. It addresses the whole area of NLP system evaluation, including aims and scope, problems and methodology. The authors provide a wide-ranging and careful analysis of evaluation concepts, reinforced with extensive illustrations; they relate systems to their environments and develop a framework for proper evaluation. The discussion of principles is completed by a detailed review of practice and strategies in the field, covering both systems for specific tasks, like translation, and core language processors. The methodology lessons drawn from the analysis and review are applied in a series of example cases. A comprehensive bibliography, a subject index, and term glossary are included
  15. Kettunen, K.: Reductive and generative approaches to management of morphological variation of keywords in monolingual information retrieval : an overview (2009) 0.03
    0.032414984 = sum of:
      0.017834382 = product of:
        0.07133753 = sum of:
          0.07133753 = weight(_text_:authors in 2835) [ClassicSimilarity], result of:
            0.07133753 = score(doc=2835,freq=2.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.30220953 = fieldWeight in 2835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2835)
        0.25 = coord(1/4)
      0.014580603 = product of:
        0.043741807 = sum of:
          0.043741807 = weight(_text_:k in 2835) [ClassicSimilarity], result of:
            0.043741807 = score(doc=2835,freq=2.0), product of:
              0.1848414 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.05177952 = queryNorm
              0.23664509 = fieldWeight in 2835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=2835)
        0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this article is to discuss advantages and disadvantages of various means to manage morphological variation of keywords in monolingual information retrieval. Design/methodology/approach - The authors present a compilation of query results from 11 mostly European languages and a new general classification of the language dependent techniques for management of morphological variation. Variants of the different techniques are compared in some detail in terms of retrieval effectiveness and other criteria. The paper consists mainly of an overview of different management methods for keyword variation in information retrieval. Typical IR retrieval results of 11 languages and a new classification for keyword management methods are also presented. Findings - The main results of the paper are an overall comparison of reductive and generative keyword management methods in terms of retrieval effectiveness and other broader criteria. Originality/value - The paper is of value to anyone who wants to get an overall picture of keyword management techniques used in IR.
  16. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: ¬A comparison study of some Arabic root finding algorithms (2010) 0.03
    0.032414984 = sum of:
      0.017834382 = product of:
        0.07133753 = sum of:
          0.07133753 = weight(_text_:authors in 3457) [ClassicSimilarity], result of:
            0.07133753 = score(doc=3457,freq=2.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.30220953 = fieldWeight in 3457, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=3457)
        0.25 = coord(1/4)
      0.014580603 = product of:
        0.043741807 = sum of:
          0.043741807 = weight(_text_:k in 3457) [ClassicSimilarity], result of:
            0.043741807 = score(doc=3457,freq=2.0), product of:
              0.1848414 = queryWeight, product of:
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.05177952 = queryNorm
              0.23664509 = fieldWeight in 3457, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.569778 = idf(docFreq=3384, maxDocs=44218)
                0.046875 = fieldNorm(doc=3457)
        0.33333334 = coord(1/3)
    
    Abstract
    Arabic has a complex structure, which makes it difficult to apply natural language processing (NLP). Much research on Arabic NLP (ANLP) does exist; however, it is not as mature as that of other languages. Finding Arabic roots is an important step toward conducting effective research on most of ANLP applications. The authors have studied and compared six root-finding algorithms with success rates of over 90%. All algorithms of this study did not use the same testing corpus and/or benchmarking measures. They unified the testing process by implementing their own algorithm descriptions and building a corpus out of 3823 triliteral roots, applying 73 triliteral patterns, and with 18 affixes, producing around 27.6 million words. They tested the algorithms with the generated corpus and have obtained interesting results; they offer to share the corpus freely for benchmarking and ANLP research.
  17. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.03
    0.031627063 = sum of:
      0.025741715 = product of:
        0.10296686 = sum of:
          0.10296686 = weight(_text_:authors in 900) [ClassicSimilarity], result of:
            0.10296686 = score(doc=900,freq=6.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.43620193 = fieldWeight in 900, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=900)
        0.25 = coord(1/4)
      0.005885347 = product of:
        0.01765604 = sum of:
          0.01765604 = weight(_text_:h in 900) [ClassicSimilarity], result of:
            0.01765604 = score(doc=900,freq=2.0), product of:
              0.12864359 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05177952 = queryNorm
              0.13724773 = fieldWeight in 900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0390625 = fieldNorm(doc=900)
        0.33333334 = coord(1/3)
    
    Abstract
    Purpose - A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of these approaches suffer from precision inefficiency at low recall. The choice of indexing units has a great impact on search system effectiveness. The authors dive beyond simple terms indexing to propose a framework for multi-word terms (MWT) filtering and indexing. Design/methodology/approach - In this paper, the authors rely on ranking MWT to filter them, keeping the most effective ones for the indexing process. The proposed model is based on filtering MWT according to their ability to capture the document topic and distinguish between different documents from the same collection. The authors rely on the hypothesis that the best MWT are those that achieve the greatest association degree. The experiments are carried out with English and French language data sets. Findings - The results indicate that this approach achieved precision enhancements at low recall, and it performed better than more advanced models based on terms dependencies. Originality/value - Using and testing different association measures to select MWT that best describe the documents to enhance the precision in the first retrieved documents.
  18. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.030839851 = product of:
      0.061679702 = sum of:
        0.061679702 = product of:
          0.24671881 = sum of:
            0.24671881 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24671881 = score(doc=862,freq=2.0), product of:
                0.43898734 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05177952 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  19. Wu, H.; He, J.; Pei, Y.: Scientific impact at the topic level : a case study in computational linguistics (2010) 0.03
    0.029046264 = sum of:
      0.020806778 = product of:
        0.08322711 = sum of:
          0.08322711 = weight(_text_:authors in 4103) [ClassicSimilarity], result of:
            0.08322711 = score(doc=4103,freq=2.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.35257778 = fieldWeight in 4103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4103)
        0.25 = coord(1/4)
      0.008239485 = product of:
        0.024718454 = sum of:
          0.024718454 = weight(_text_:h in 4103) [ClassicSimilarity], result of:
            0.024718454 = score(doc=4103,freq=2.0), product of:
              0.12864359 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05177952 = queryNorm
              0.19214681 = fieldWeight in 4103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4103)
        0.33333334 = coord(1/3)
    
    Abstract
    In this article, we propose to apply the topic model and topic-level eigenfactor (TEF) algorithm to assess the relative importance of academic entities including articles, authors, journals, and conferences. Scientific impact is measured by the biased PageRank score toward topics created by the latent topic model. The TEF metric considers the impact of an academic entity in multiple granular views as well as in a global view. Experiments on a computational linguistics corpus show that the method is a useful and promising measure to assess scientific impact.
  20. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: ¬A bibliometric and network analysis of the field of computational linguistics (2016) 0.03
    0.029046264 = sum of:
      0.020806778 = product of:
        0.08322711 = sum of:
          0.08322711 = weight(_text_:authors in 2764) [ClassicSimilarity], result of:
            0.08322711 = score(doc=2764,freq=2.0), product of:
              0.2360532 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05177952 = queryNorm
              0.35257778 = fieldWeight in 2764, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2764)
        0.25 = coord(1/4)
      0.008239485 = product of:
        0.024718454 = sum of:
          0.024718454 = weight(_text_:h in 2764) [ClassicSimilarity], result of:
            0.024718454 = score(doc=2764,freq=2.0), product of:
              0.12864359 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05177952 = queryNorm
              0.19214681 = fieldWeight in 2764, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2764)
        0.33333334 = coord(1/3)
    
    Abstract
    The ACL Anthology is a large collection of research papers in computational linguistics. Citation data were obtained using text extraction from a collection of PDF files with significant manual postprocessing performed to clean up the results. Manual annotation of the references was then performed to complete the citation network. We analyzed the networks of paper citations, author citations, and author collaborations in an attempt to identify the most central papers and authors. The analysis includes general network statistics, PageRank, metrics across publication years and venues, the impact factor and h-index, as well as other measures.
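
The score shown after each entry above is detailed by the indented Lucene "explain" breakdown beneath it, produced by the ClassicSimilarity (TF-IDF) ranking: for every matched term, fieldWeight = tf(freq) × idf × fieldNorm with tf(freq) = sqrt(freq), queryWeight = idf × queryNorm, the reported term weight is the product of the two, and the clause sums are scaled by their coord factors before being added. As a minimal illustration (not part of any catalogue record; the helper name term_score is our own), the following Python sketch re-computes the breakdown shown for result 1 (doc 562) from the constants it reports:

    import math

    # Constants copied from the explain tree of result 1 (doc 562) above.
    QUERY_NORM = 0.05177952

    def term_score(freq, idf, field_norm, query_norm=QUERY_NORM):
        # fieldWeight = tf(freq) * idf * fieldNorm, with tf(freq) = sqrt(freq)
        tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
        field_weight = tf * idf * field_norm  # e.g. 0.56201804 for the "3a" clause
        query_weight = idf * query_norm       # e.g. 0.43898734 for the "3a" clause
        return field_weight * query_weight    # the reported weight(_text_:term in 562)

    w_3a = term_score(freq=2.0, idf=8.478011, field_norm=0.046875)   # ~0.24671881
    w_22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.046875)  # ~0.042092457

    # Each clause is scaled by its coord factor (matching clauses / total clauses) before summing.
    total = w_3a * 0.25 + w_22 * (1 / 3)
    print(round(total, 8))  # ~0.07571052, displayed as 0.08 in the hit list

The same arithmetic reproduces the other breakdowns in this list; only freq, idf, fieldNorm and the coord factors change from entry to entry.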

Languages

  • e 151
  • d 120
  • m 3
  • chi 1
  • f 1

Types

  • a 225
  • m 32
  • el 18
  • s 14
  • x 6
  • d 2
  • p 2
