Search (275 results, page 1 of 14)

  • Filter: theme_ss:"Computerlinguistik"
  1. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.04
    0.042419046 = product of:
      0.08483809 = sum of:
        0.06111743 = weight(_text_:l in 8521) [ClassicSimilarity], result of:
          0.06111743 = score(doc=8521,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.35131297 = fieldWeight in 8521, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.0625 = fieldNorm(doc=8521)
        0.023720661 = product of:
          0.047441322 = sum of:
            0.047441322 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.047441322 = score(doc=8521,freq=2.0), product of:
                0.15327339 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043769516 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    31. 7.1996 9:22:19
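    Note
    The explain trees on this page all instantiate Lucene's ClassicSimilarity (tf-idf) scoring: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), coord(m/n) = m/n scales a clause by the fraction of its subqueries that matched, and queryNorm is a query-wide constant. As a sanity check, a minimal Python sketch (no Lucene required; constants copied from the tree above) reproduces the score of result 1:

      import math

      MAX_DOCS = 44218          # maxDocs from the explain output
      QUERY_NORM = 0.043769516  # queryNorm from the explain output

      def idf(doc_freq):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(doc_freq, freq, field_norm):
          query_weight = idf(doc_freq) * QUERY_NORM                    # idf * queryNorm
          field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      # Result 1 (doc 8521): term "l" (docFreq=2257) and term "22" (docFreq=3622),
      # both with freq=2.0 and fieldNorm=0.0625.
      w_l  = term_score(2257, 2.0, 0.0625)        # ~ 0.06111743
      w_22 = term_score(3622, 2.0, 0.0625) * 0.5  # inner coord(1/2)
      print((w_l + w_22) * 0.5)                   # outer coord(2/4) -> ~ 0.042419046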
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.03
    0.034964345 = product of:
      0.06992869 = sum of:
        0.052138194 = product of:
          0.20855278 = sum of:
            0.20855278 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20855278 = score(doc=562,freq=2.0), product of:
                0.37107843 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043769516 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.017790494 = product of:
          0.03558099 = sum of:
            0.03558099 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03558099 = score(doc=562,freq=2.0), product of:
                0.15327339 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043769516 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    See: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.03
    0.03150685 = product of:
      0.0630137 = sum of:
        0.05263591 = weight(_text_:van in 3807) [ClassicSimilarity], result of:
          0.05263591 = score(doc=3807,freq=2.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.21564616 = fieldWeight in 3807, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3807)
        0.010377789 = product of:
          0.020755578 = sum of:
            0.020755578 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.020755578 = score(doc=3807,freq=2.0), product of:
                0.15327339 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043769516 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    20. 1.2015 18:30:22
  4. Sembok, T.M.T.; Rijsbergen, C.J. van: SILOL: a simple logical-linguistic document retrieval system (1990) 0.03
    0.030077666 = product of:
      0.120310664 = sum of:
        0.120310664 = weight(_text_:van in 6684) [ClassicSimilarity], result of:
          0.120310664 = score(doc=6684,freq=2.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.49290553 = fieldWeight in 6684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0625 = fieldNorm(doc=6684)
      0.25 = coord(1/4)
    
  5. Der Student aus dem Computer (2023) 0.03
    0.029200982 = product of:
      0.11680393 = sum of:
        0.11680393 = sum of:
          0.03378162 = weight(_text_:der in 1079) [ClassicSimilarity], result of:
            0.03378162 = score(doc=1079,freq=2.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.34551817 = fieldWeight in 1079, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.109375 = fieldNorm(doc=1079)
          0.08302231 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
            0.08302231 = score(doc=1079,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.5416616 = fieldWeight in 1079, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=1079)
      0.25 = coord(1/4)
    
    Date
    27. 1.2023 16:22:55
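    Note
    The fieldNorm values recurring in these trees (0.0625, 0.046875, 0.109375, ...) are length norms: ClassicSimilarity stores boost / sqrt(fieldLength) compressed into a single byte with a 3-bit mantissa, which is why only such coarse values ever appear. A rough Python sketch of that quantization (an approximation of Lucene's SmallFloat byte encoding, not a bit-exact reimplementation):

      import math

      def field_norm(num_terms, boost=1.0):
          raw = boost / math.sqrt(num_terms)    # lengthNorm before encoding
          exp = math.floor(math.log2(raw))      # keep only 3 mantissa bits
          mantissa = round(raw / 2**exp * 8) / 8
          return mantissa * 2**exp

      print(field_norm(256))  # -> 0.0625   (a field of ~256 terms)
      print(field_norm(84))   # -> 0.109375 (a short field, e.g. a title)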
  6. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.03
    0.028429307 = product of:
      0.056858614 = sum of:
        0.05263591 = weight(_text_:van in 5089) [ClassicSimilarity], result of:
          0.05263591 = score(doc=5089,freq=2.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.21564616 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5089)
        0.0042227027 = product of:
          0.008445405 = sum of:
            0.008445405 = weight(_text_:der in 5089) [ClassicSimilarity], result of:
              0.008445405 = score(doc=5089,freq=2.0), product of:
                0.09777089 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043769516 = queryNorm
                0.08637954 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
    Series
    Berichte aus der Informatik
  7. Guthrie, L.; Pustejovsky, J.; Wilks, Y.; Slator, B.M.: The role of lexicons in natural language processing (1996) 0.03
    0.026738876 = product of:
      0.106955506 = sum of:
        0.106955506 = weight(_text_:l in 6825) [ClassicSimilarity], result of:
          0.106955506 = score(doc=6825,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.6147977 = fieldWeight in 6825, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.109375 = fieldNorm(doc=6825)
      0.25 = coord(1/4)
    
  8. Altmann, E.G.; Cristadoro, G.; Esposti, M.D.: On the origin of long-range correlations in texts (2012) 0.03
    0.026538493 = product of:
      0.053076986 = sum of:
        0.04583807 = weight(_text_:l in 330) [ClassicSimilarity], result of:
          0.04583807 = score(doc=330,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.26348472 = fieldWeight in 330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.046875 = fieldNorm(doc=330)
        0.0072389184 = product of:
          0.014477837 = sum of:
            0.014477837 = weight(_text_:der in 330) [ClassicSimilarity], result of:
              0.014477837 = score(doc=330,freq=2.0), product of:
                0.09777089 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043769516 = queryNorm
                0.14807922 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.046875 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Cf. the press release on the article: In search of the decisive word: the clustering of certain words within a text makes them keywords [11 July 2012]. At: http://www.mpg.de/5894319/statistische_Textanalyse?filter_order=L. See also: http://arxiv.org/list/cs.CL/current.
  9. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.03
    0.026511904 = product of:
      0.053023808 = sum of:
        0.038198393 = weight(_text_:l in 1171) [ClassicSimilarity], result of:
          0.038198393 = score(doc=1171,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.2195706 = fieldWeight in 1171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
        0.014825413 = product of:
          0.029650826 = sum of:
            0.029650826 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.029650826 = score(doc=1171,freq=2.0), product of:
                0.15327339 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043769516 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    23.11.2023 19:07:22
  10. Cruys, T. van de; Moirón, B.V.: Semantics-based multiword expression extraction (2007) 0.03
    0.026317956 = product of:
      0.10527182 = sum of:
        0.10527182 = weight(_text_:van in 2919) [ClassicSimilarity], result of:
          0.10527182 = score(doc=2919,freq=2.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.43129233 = fieldWeight in 2919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2919)
      0.25 = coord(1/4)
    
  11. Renker, L.: Exploration von Textkorpora : Topic Models als Grundlage der Interaktion (2015) 0.03
    0.025843661 = product of:
      0.051687323 = sum of:
        0.038198393 = weight(_text_:l in 2380) [ClassicSimilarity], result of:
          0.038198393 = score(doc=2380,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.2195706 = fieldWeight in 2380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2380)
        0.013488929 = product of:
          0.026977858 = sum of:
            0.026977858 = weight(_text_:der in 2380) [ClassicSimilarity], result of:
              0.026977858 = score(doc=2380,freq=10.0), product of:
                0.09777089 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043769516 = queryNorm
                0.27592933 = fieldWeight in 2380, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2380)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Internet holds virtually endless information; a central problem today is making it accessible. Formulating the right queries in a full-text search requires solid domain knowledge, which is often lacking, so a great deal of time must be spent just to gain an overview of the topic at hand. In such situations the user is engaged in an exploratory search, working towards a topic step by step. Machine-learning methods are by now routinely used to organize data, but in most cases they remain invisible to the user. Using them interactively in exploratory search processes could couple human judgement more closely with the machine processing of large amounts of data. Topic models are precisely such methods: they uncover latent topics in a text corpus that humans can interpret relatively well, which makes them promising for exploratory search, where they can support users in understanding unfamiliar sources. A survey of the relevant research showed that topic models are used predominantly to produce static visualizations. Sensemaking, although an essential part of exploratory search, is drawn on only to a very limited extent to motivate algorithmic innovations and to place them in a broader context. This suggests that applying models of sensemaking, together with user-centred design of exploratory search, can yield new functions for interacting with topic models and provide a context for the corresponding research. (A brief illustrative sketch follows this entry.)
    Footnote
    Master's thesis for the degree of Master of Science (M.Sc.), submitted to the Fachhochschule Köln / Faculty of Computer Science and Engineering Science, Media Informatics programme.
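    Note
    Since the abstract describes topic models as machine-learning methods that uncover human-interpretable latent themes in a corpus, a minimal illustrative sketch using the gensim library follows; the toy corpus and parameter values are hypothetical, chosen only to show the shape of such a model:

      from gensim import corpora, models

      # Toy stand-in for a real document collection (hypothetical data).
      texts = [
          ["topic", "model", "corpus", "latent", "theme"],
          ["user", "search", "exploratory", "query", "overview"],
          ["topic", "corpus", "latent", "word", "distribution"],
          ["search", "user", "interaction", "sensemaking", "query"],
      ]

      dictionary = corpora.Dictionary(texts)          # word <-> id mapping
      bow = [dictionary.doc2bow(t) for t in texts]    # bag-of-words vectors

      # Fit an LDA topic model with two latent topics.
      lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                            passes=10, random_state=42)

      for topic_id, words in lda.print_topics(num_words=4):
          print(topic_id, words)                      # top words per topic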
  12. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    0.025029413 = product of:
      0.10011765 = sum of:
        0.10011765 = sum of:
          0.028955674 = weight(_text_:der in 4888) [ClassicSimilarity], result of:
            0.028955674 = score(doc=4888,freq=2.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.29615843 = fieldWeight in 4888, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.09375 = fieldNorm(doc=4888)
          0.07116198 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
            0.07116198 = score(doc=4888,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.46428138 = fieldWeight in 4888, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4888)
      0.25 = coord(1/4)
    
    Abstract
    With an overview of problems, methods, the state of research, and the literature.
    Date
    1. 3.2013 14:56:22
  13. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.03
    0.025029413 = product of:
      0.10011765 = sum of:
        0.10011765 = sum of:
          0.028955674 = weight(_text_:der in 5429) [ClassicSimilarity], result of:
            0.028955674 = score(doc=5429,freq=2.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.29615843 = fieldWeight in 5429, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.09375 = fieldNorm(doc=5429)
          0.07116198 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
            0.07116198 = score(doc=5429,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.46428138 = fieldWeight in 5429, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=5429)
      0.25 = coord(1/4)
    
    Abstract
    Human translators are still linguistically superior to computers. Translation software has improved, but the problems inherent in the systems remain.
    Source
    c't. 2000, H.22, S.230-231
  14. Alonge, A.; Calzolari, N.; Vossen, P.; Bloksma, L.; Castellon, I.; Marti, M.A.; Peters, W.: The linguistic design of the EuroWordNet database (1998) 0.02
    0.022919035 = product of:
      0.09167614 = sum of:
        0.09167614 = weight(_text_:l in 6440) [ClassicSimilarity], result of:
          0.09167614 = score(doc=6440,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.52696943 = fieldWeight in 6440, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.09375 = fieldNorm(doc=6440)
      0.25 = coord(1/4)
    
  15. Rodriguez, H.; Climent, S.; Vossen, P.; Bloksma, L.; Peters, W.; Alonge, A.; Bertagna, F.; Roventini, A.: ¬The top-down strategy for building EuroWordNet : vocabulary coverage, base concept and top ontology (1998) 0.02
    0.022919035 = product of:
      0.09167614 = sum of:
        0.09167614 = weight(_text_:l in 6441) [ClassicSimilarity], result of:
          0.09167614 = score(doc=6441,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.52696943 = fieldWeight in 6441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.09375 = fieldNorm(doc=6441)
      0.25 = coord(1/4)
    
  16. Vossen, P.; Bloksma, L.; Alonge, A.; Marinai, E.; Peters, C.; Catellon, I.; Marti, M.A.; Rigau, G.: Compatibility in interpretation of relations in EuroWordNet (1998) 0.02
    0.022919035 = product of:
      0.09167614 = sum of:
        0.09167614 = weight(_text_:l in 6442) [ClassicSimilarity], result of:
          0.09167614 = score(doc=6442,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.52696943 = fieldWeight in 6442, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.09375 = fieldNorm(doc=6442)
      0.25 = coord(1/4)
    
  17. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.02
    0.021268122 = product of:
      0.08507249 = sum of:
        0.08507249 = weight(_text_:van in 3670) [ClassicSimilarity], result of:
          0.08507249 = score(doc=3670,freq=4.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.34853685 = fieldWeight in 3670, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.03125 = fieldNorm(doc=3670)
      0.25 = coord(1/4)
    
    Content
    Contents: Argument hierarchy and other factors determining argument realization / Dieter Wunderlich - Mismatches in semantic-role hierarchies and the dimensions of role semantics / Beatrice Primus - Thematic roles : universal, particular, and idiosyncratic aspects / Manfred Bierwisch - Experiencer constructions in Daghestanian languages / Bernard Comrie and Helma van den Berg - Clause-level vs. predicate-level linking / Balthasar Bickel - From meaning to syntax semantic roles and beyond / Walter Bisang - Meaning, form and function in basic case roles / Georg Bossong - Semantic macroroles and language processing / Robert D. Van Valin, Jr. - Thematic roles as event structure relations / Maria Mercedes Piñango - Generalised semantic roles and syntactic templates: A new framework for language comprehension / Ina Bornkessel and Matthias Schlesewsky
  18. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.02
    0.019753624 = product of:
      0.079014495 = sum of:
        0.079014495 = sum of:
          0.04343351 = weight(_text_:der in 1746) [ClassicSimilarity], result of:
            0.04343351 = score(doc=1746,freq=18.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.44423765 = fieldWeight in 1746, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.046875 = fieldNorm(doc=1746)
          0.03558099 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
            0.03558099 = score(doc=1746,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.23214069 = fieldWeight in 1746, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1746)
      0.25 = coord(1/4)
    
    Abstract
    This thesis develops an approach that overcomes the fixation on the word and the weaknesses that come with it. The approach permits the extraction of information based on the concepts represented, and thus forms the basis of a content-oriented text analysis. The subsequent prototype implementation serves to test the design and to assess and evaluate its possibilities and limits. Work on information extraction is devoted almost exclusively to English, with particularly good results in the area of named entities; results for less regular languages such as German are markedly worse. For this reason, and for practical considerations, in particular the author's familiarity with it, German is the primary subject of this study. Moving away from a narrow term orientation while emphasizing the concepts represented suggests that not only the words used become secondary but also the language used. So as not to exceed the scope of this thesis, the examination of this point focuses above all on the difficulties and peculiarities associated with different languages.
    Content
    Dissertation at the Universität Trier, Department IV, for the degree of Doctor of Economics and Social Sciences. See: http://ubt.opus.hbz-nrw.de/volltexte/2006/377/pdf/LorenzSaschaDiss.pdf.
    Date
    22. 3.2015 9:17:30
  19. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.02
    0.01695082 = product of:
      0.06780328 = sum of:
        0.06780328 = sum of:
          0.03815245 = weight(_text_:der in 734) [ClassicSimilarity], result of:
            0.03815245 = score(doc=734,freq=20.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.390223 = fieldWeight in 734, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
          0.029650826 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
            0.029650826 = score(doc=734,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.19345059 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
      0.25 = coord(1/4)
    
    Abstract
    How do children learn to speak? What clues do their errors during language acquisition give about the course of the learning process, true to the motto that children say the darnedest things? And how do computers help, or why have they so far failed, in simulating the neural networks involved in the complicated fabric of human language? In his new book Wörter und Regeln (Words and Rules), the well-known American cognitive scientist Steven Pinker (The Language Instinct) has once again undertaken an excursion into the realm of language that is as informative as it is entertaining. What makes it especially engaging and worth reading is that the MIT professor illuminates both scientific and humanistic aspects with equal authority. On the one hand he conveys linguistic foundations in the footsteps of Ferdinand de Saussure, such as generative grammar, offers an excursus through the history of language, and devotes a chapter of its own to the "horrors of the German language". On the other hand he also covers the latest imaging techniques, which show what happens in the brain during language processing. Pinker's theory, which emerges from this puzzle of diverse aspects: at its core, language consists of two components, a mental lexicon of remembered words and a mental grammar of combinatorial rules. Concretely, this means that we memorize familiar items and their graded, overlapping features, but we also generate new mental products by applying rules. It is precisely from this, Pinker concludes, that the richness and enormous expressive power of our language derives.
    Date
    19. 7.2002 14:22:31
  20. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.02
    0.016686276 = product of:
      0.0667451 = sum of:
        0.0667451 = sum of:
          0.019303782 = weight(_text_:der in 1490) [ClassicSimilarity], result of:
            0.019303782 = score(doc=1490,freq=2.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.19743896 = fieldWeight in 1490, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0625 = fieldNorm(doc=1490)
          0.047441322 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
            0.047441322 = score(doc=1490,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.30952093 = fieldWeight in 1490, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1490)
      0.25 = coord(1/4)
    
    Abstract
    Morphy is a freely available software package for morphological analysis and synthesis and for context-sensitive part-of-speech tagging of German. Use of the software is subject to no restrictions. Since development has been discontinued, you use Morphy as is, i.e. at your own risk, without any liability or warranty and, above all, without support. Morphy is available only for the Windows platform and runs only on standalone PCs.
    Date
    22. 3.2015 9:30:24

Languages

  • d 185
  • e 85
  • m 4

Types

  • a 199
  • m 44
  • el 36
  • s 20
  • x 12
  • d 2
  • p 2