Search (136 results, page 1 of 7)

  • theme_ss:"Computerlinguistik"
  1. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.09
    Source
    c't. 2000, H.22, S.230-231
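  Note on the ranking: the figure after each hit (0.09, 0.08, ...) is the Lucene relevance score, computed by ClassicSimilarity, the classic TF-IDF model named in the engine's score breakdowns. As a minimal sketch (not part of the engine), the following Python reproduces the first hit's 0.09 from the constants the engine reported for it: both matched index terms ("p" and "22") occur twice in the record, idf is taken over 44218 documents (docFreq 3298 and 3622), queryNorm is 0.052200247, and fieldNorm is 0.09375. The helper name is ours.

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # ClassicSimilarity building blocks:
          #   tf          = sqrt(termFreq)
          #   idf         = 1 + ln(maxDocs / (docFreq + 1))
          #   queryWeight = idf * queryNorm
          #   fieldWeight = tf * idf * fieldNorm
          # A term contributes queryWeight * fieldWeight to the document score.
          tf = math.sqrt(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          return (idf * query_norm) * (tf * idf * field_norm)

      QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.052200247, 0.09375, 44218

      w_p  = term_score(2.0, 3298, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.089470
      w_22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.084869
      score = 0.5 * (w_p + w_22)  # coord(1/2): one of two clause groups matched
      print(round(score, 8))      # ~0.08716979, displayed rounded as 0.09

  The same formula, with each record's own freq, idf, and fieldNorm values, accounts for every score in this list.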
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.08
    Source
    https://arxiv.org/abs/2212.06721
    Type
    p
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  4. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.07
    Source
    c't. 2000, H.22, S.220-229
  5. Basili, R.; Pazienza, M.T.; Velardi, P.: An empirical symbolic approach to natural language processing (1996) 0.06
    Date
    6. 3.1997 16:22:15
  6. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.05
    Date
    8. 3.2007 19:55:22
  7. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016) 0.05
    Abstract
    The ACL Anthology is a large collection of research papers in computational linguistics. Citation data were obtained using text extraction from a collection of PDF files with significant manual postprocessing performed to clean up the results. Manual annotation of the references was then performed to complete the citation network. We analyzed the networks of paper citations, author citations, and author collaborations in an attempt to identify the most central papers and authors. The analysis includes general network statistics, PageRank, metrics across publication years and venues, the impact factor and h-index, as well as other measures.
  8. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.05
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, through suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
    Source
    Cataloging and classification quarterly. 60(2022) no.8, p.807-835
  9. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.04
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  10. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is such a term, and it is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected under strict criteria, including that definitions be unique instances. From 2006 onwards the authors could identify no new unique definitions, only repeated uses of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). From an academic point of view, KM therefore refers to people processing contextualised content.
    Research limitations/implications: In total, 42 definitions were identified, spanning a period of 11 years. This period covered the first use of KM through to the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions considered repetitions were excluded as not being unique instances. The definitions listed are by no means complete and exhaustive. They are viewed outside the scope and context in which they were originally formulated and are then used to review the key concepts in the definitions themselves.
    Social implications: The foregoing discussion of KM content, together with the method presented in this paper, carries a few implications for future research in KM. First, the research validates ideas presented by the OECD in 2005 pertaining to KM. It also validates that, through the evolution of KM, the authors arrived at a description of KM that may be seen as standardised. If academics and practitioners refer to KM as the same construct and/or idea, it becomes possible, if speculatively, to distinguish between what KM may or may not be.
    Originality/value: By simplifying the term used to define KM and focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions most common in how it has been defined over time. This would hopefully assist in reigniting discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  11. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.04
    Date
    23.11.2023 19:07:22
    Type
    p
  12. Park, J.S.; O'Brien, J.C.; Cai, C.J.; Ringel Morris, M.; Liang, P.; Bernstein, M.S.: Generative agents : interactive simulacra of human behavior (2023) 0.03
    Abstract
    Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
  13. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.03
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  14. Peters, W.; Vossen, P.; Diez-Orzas, P.; Adriaens, G.: Cross-linguistic alignment of WordNets with an inter-lingual-index (1998) 0.03
  15. Semantik und künstliche Intelligenz : Beiträge zur automatischen Sprachbearbeitung II (1977) 0.03
    Editor
    Eisenberg, P.
  16. Drouin, P.: Term extraction using non-technical corpora as a point of leverage (2003) 0.03
  17. Warner, A.J.: Natural language processing (1987) 0.03
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  18. Tenopir, C.; Cahn, P.: TARGET & FREESTYLE : DIALOG and Mead join the relevance ranks (1994) 0.03
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.446-456.
  19. Minsky, M.L.: Materie, Geist, Modell (1977) 0.03
    Source
    Semantik und künstliche Intelligenz: Beiträge zur automatischen Sprachbearbeitung II. Ed. and introduced by P. Eisenberg
  20. Schank, R.C.: Computer, elementare Aktionen und linguistische Theorien (1977) 0.03
    Source
    Semantik und künstliche Intelligenz: Beiträge zur automatischen Sprachbearbeitung II. Ed. and introduced by P. Eisenberg

Languages

  • e 108
  • d 26
  • m 1
  • slv 1

Types

  • a 104
  • el 19
  • m 15
  • p 7
  • s 7
  • x 2
  • b 1
  • d 1