Search (120 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08276204 = sum of:
      0.061706625 = product of:
        0.2468265 = sum of:
          0.2468265 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2468265 = score(doc=562,freq=2.0), product of:
              0.43917897 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05180212 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.021055417 = product of:
        0.042110834 = sum of:
          0.042110834 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042110834 = score(doc=562,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
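
The nested breakdowns beneath each hit are Lucene "explain" trees for its classic TF-IDF similarity (labelled ClassicSimilarity above). As a hedged illustration, the following minimal Python sketch (my own reconstruction, not part of the catalogue software; the function names are mine and the hard-coded numbers are taken from the tree of result no. 1) reproduces the leaf values: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), a leaf score is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm), and the coord factors then scale each branch.

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def leaf_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                     # tf(freq) = sqrt(freq)
    w = idf(doc_freq, max_docs)
    query_weight = w * query_norm            # query-side weight
    field_weight = tf * w * field_norm       # document-side weight
    return query_weight * field_weight

# Result no. 1 (doc 562): the "_text_:3a" and "_text_:22" leaves
leaf_3a = leaf_score(2.0, 24, 44218, 0.05180212, 0.046875)    # ~0.2468 (listed: 0.2468265)
leaf_22 = leaf_score(2.0, 3622, 44218, 0.05180212, 0.046875)  # ~0.0421 (listed: 0.042110834)
total = leaf_3a * 0.25 + leaf_22 * 0.5                        # coord(1/4) and coord(1/2) factors
print(round(total, 8))   # ~0.0828; the listing shows 0.08276204 (float32 rounding aside)
```

The same four quantities (tf, idf, queryNorm/fieldNorm, coord) account for every other score breakdown in this result list.
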
  2. Melucci, M.; Orio, N.: Design, implementation, and evaluation of a methodology for automatic stemmer generation (2007) 0.06
    0.058055796 = sum of:
      0.02081586 = product of:
        0.08326344 = sum of:
          0.08326344 = weight(_text_:authors in 268) [ClassicSimilarity], result of:
            0.08326344 = score(doc=268,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.35257778 = fieldWeight in 268, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=268)
        0.25 = coord(1/4)
      0.037239935 = product of:
        0.07447987 = sum of:
          0.07447987 = weight(_text_:n in 268) [ClassicSimilarity], result of:
            0.07447987 = score(doc=268,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.33346266 = fieldWeight in 268, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0546875 = fieldNorm(doc=268)
        0.5 = coord(1/2)
    
    Abstract
    The authors describe a statistical approach based on hidden Markov models (HMMs) for generating stemmers automatically. The proposed approach requires little effort to insert new languages in the system even if minimal linguistic knowledge is available. This is a key advantage especially for digital libraries, which are often developed for a specific institution or government, because the program can manage a great number of documents written in local languages. The evaluation described in the article shows that the stemmers implemented by means of HMMs are as effective as those based on linguistic rules.
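
A hedged sketch of the idea behind this kind of HMM stemmer (my own toy code, not the authors' implementation): the letters of a word are emitted by an HMM whose states fall into a stem group and a suffix group, and the most probable state path marks the stem/suffix boundary. All probabilities below are illustrative placeholders; in the paper they would be estimated from a corpus of the target language.

```python
import math

STATES = ("stem", "suffix")
START = {"stem": 1.0, "suffix": 0.0}                # a word starts in the stem
TRANS = {"stem":   {"stem": 0.8, "suffix": 0.2},    # the stem continues or ends
         "suffix": {"stem": 0.0, "suffix": 1.0}}    # no way back into the stem

def emit(state: str, ch: str) -> float:
    # Toy emissions: suffix states prefer letters typical of (English) endings.
    if state == "suffix":
        return 0.2 if ch in "seingd" else 0.01
    return 1.0 / 26.0

def viterbi_stem(word: str) -> str:
    """Return the prefix labelled 'stem' on the most probable state path."""
    log = lambda p: math.log(p + 1e-12)             # avoid log(0)
    best = {s: log(START[s]) + log(emit(s, word[0])) for s in STATES}
    back = []
    for ch in word[1:]:
        new, ptr = {}, {}
        for s in STATES:
            prev, p = max(((q, best[q] + log(TRANS[q][s])) for q in STATES),
                          key=lambda t: t[1])
            new[s], ptr[s] = p + log(emit(s, ch)), prev
        best, back = new, back + [ptr]
    state = max(best, key=best.get)                 # backtrack the winning path
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return word[:path.count("stem")]

print(viterbi_stem("stemming"))   # -> "stemm" with these toy parameters
```

Because the suffix states cannot return to the stem, every decoded path is a stem prefix followed by a suffix, which is what makes the split point well defined.
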
  3. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.04
    0.044146135 = product of:
      0.08829227 = sum of:
        0.08829227 = sum of:
          0.053199906 = weight(_text_:n in 190) [ClassicSimilarity], result of:
            0.053199906 = score(doc=190,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.23818761 = fieldWeight in 190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=190)
          0.03509236 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
            0.03509236 = score(doc=190,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.19345059 = fieldWeight in 190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=190)
      0.5 = coord(1/2)
    
    Date
    14. 4.2007 10:04:22
    Editor
    Weber, N.
  4. ISO/DIS 1087-2:1994-09: Terminology work, vocabulary : pt.2: computational aids (1994) 0.04
    0.042559925 = product of:
      0.08511985 = sum of:
        0.08511985 = product of:
          0.1702397 = sum of:
            0.1702397 = weight(_text_:n in 2912) [ClassicSimilarity], result of:
              0.1702397 = score(doc=2912,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.76220036 = fieldWeight in 2912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=2912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    n
  5. ISO/TR 12618:1994: Computational aids in terminology : creation and use of terminological databases and text corpora (1994) 0.04
    0.042559925 = product of:
      0.08511985 = sum of:
        0.08511985 = product of:
          0.1702397 = sum of:
            0.1702397 = weight(_text_:n in 2913) [ClassicSimilarity], result of:
              0.1702397 = score(doc=2913,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.76220036 = fieldWeight in 2913, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=2913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    n
  6. Sager, N.: Natural language information processing (1981) 0.04
    0.042559925 = product of:
      0.08511985 = sum of:
        0.08511985 = product of:
          0.1702397 = sum of:
            0.1702397 = weight(_text_:n in 5313) [ClassicSimilarity], result of:
              0.1702397 = score(doc=5313,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.76220036 = fieldWeight in 5313, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=5313)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    0.03981912 = sum of:
      0.027536796 = product of:
        0.110147186 = sum of:
          0.110147186 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.110147186 = score(doc=3807,freq=14.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.012282327 = product of:
        0.024564654 = sum of:
          0.024564654 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.024564654 = score(doc=3807,freq=2.0), product of:
              0.1814022 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05180212 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.5 = coord(1/2)
    
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria which included that definitions should be unique instances. From 2006 onwards, the authors could identify no new unique definition instances, only repeated usage of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
    Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods.
    Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Research limitations/implications: In total, 42 definitions were identified spanning a period of 11 years. This represented the first use of KM through the estimated apex of terms used. From 2006 onwards definitions were used in repetition, and all definitions that were considered to repeat were therefore subsequently excluded as not being unique instances. All definitions listed are by no means complete and exhaustive. The definitions are viewed outside the scope and context in which they were originally formulated and then used to review the key concepts in the definitions themselves.
    Social implications: The aforementioned discussion of KM content, together with the method followed in this paper, has a few implications for future research in KM. First, the research validates ideas presented by the OECD in 2005 pertaining to KM. It also validates that through the evolution of KM, the authors ended with a description of KM that may be seen as a standardised description. If academics and practitioners, for example, refer to KM as the same construct and/or idea, it has the potential to, speculatively, distinguish between what KM may or may not be.
    Originality/value: By simplifying the terms used to define KM and by focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This would hopefully assist in reigniting discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  8. Gonzalo, J.; Verdejo, F.; Peters, C.; Calzolari, N.: Applying EuroWordNet to cross-language text retrieval (1998) 0.04
    0.037239935 = product of:
      0.07447987 = sum of:
        0.07447987 = product of:
          0.14895974 = sum of:
            0.14895974 = weight(_text_:n in 6445) [ClassicSimilarity], result of:
              0.14895974 = score(doc=6445,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.6669253 = fieldWeight in 6445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Ahmed, F.; Nürnberger, A.: Evaluation of n-gram conflation approaches for Arabic text retrieval (2009) 0.04
    0.035687584 = product of:
      0.07137517 = sum of:
        0.07137517 = product of:
          0.14275034 = sum of:
            0.14275034 = weight(_text_:n in 2941) [ClassicSimilarity], result of:
              0.14275034 = score(doc=2941,freq=10.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.63912445 = fieldWeight in 2941, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2941)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we present a language-independent approach for conflation that does not depend on predefined rules or prior knowledge of the target language. The proposed unsupervised method is based on an enhancement of the pure n-gram model that can group related words based on various string-similarity measures, while restricting the search to specific locations of the target word by taking into account the order of n-grams. We show that the method is effective in achieving high similarity scores for all word-form variations and reduces ambiguity, i.e., obtains higher precision and recall, compared to pure n-gram-based approaches for English, Portuguese, and Arabic. The proposed method is especially suited for conflation approaches in Arabic, since Arabic is a highly inflectional language. Therefore, we present in addition an adaptive user interface for Arabic text retrieval called araSearch. araSearch serves as a metasearch interface to existing search engines. The system is able to extend a query using the proposed conflation approach such that additional results for relevant subwords can be found automatically.
    Object
    n-grams
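
A hedged sketch of the grouping step described in the abstract of entry 9 above (my own toy code; the paper's method additionally restricts matches to specific n-gram positions within the word, which this version omits): word-form variants are conflated when the Dice similarity of their character n-gram sets reaches a threshold.

```python
def ngrams(word: str, n: int = 2) -> set:
    """Character n-grams, padded so that word boundaries count as context."""
    padded = f"_{word}_"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice(a: str, b: str, n: int = 2) -> float:
    """Dice coefficient of two words over their character n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def conflate(words, threshold: float = 0.55):
    """Greedily group word forms whose n-gram similarity reaches the threshold."""
    groups = []
    for w in words:
        for g in groups:
            if any(dice(w, v) >= threshold for v in g):
                g.append(w)
                break
        else:
            groups.append([w])
    return groups

print(conflate(["retrieval", "retrieve", "retrieved", "stemmer", "stemming"]))
# -> [['retrieval', 'retrieve', 'retrieved'], ['stemmer', 'stemming']] with these settings
```
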
  10. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automation extractions (1997) 0.03
    0.031919945 = product of:
      0.06383989 = sum of:
        0.06383989 = product of:
          0.12767978 = sum of:
            0.12767978 = weight(_text_:n in 733) [ClassicSimilarity], result of:
              0.12767978 = score(doc=733,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.57165027 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=733)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Hrsg.: N. Fuhr u.a
  11. Alonge, A.; Calzolari, N.; Vossen, P.; Bloksma, L.; Castellon, I.; Marti, M.A.; Peters, W.: ¬The linguistic design of the EuroWordNet database (1998) 0.03
    0.031919945 = product of:
      0.06383989 = sum of:
        0.06383989 = product of:
          0.12767978 = sum of:
            0.12767978 = weight(_text_:n in 6440) [ClassicSimilarity], result of:
              0.12767978 = score(doc=6440,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.57165027 = fieldWeight in 6440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6440)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.03
    0.031919945 = product of:
      0.06383989 = sum of:
        0.06383989 = product of:
          0.12767978 = sum of:
            0.12767978 = weight(_text_:n in 6501) [ClassicSimilarity], result of:
              0.12767978 = score(doc=6501,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.57165027 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.030853312 = product of:
      0.061706625 = sum of:
        0.061706625 = product of:
          0.2468265 = sum of:
            0.2468265 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.2468265 = score(doc=862,freq=2.0), product of:
                0.43917897 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05180212 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  14. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.03
    0.029739656 = product of:
      0.05947931 = sum of:
        0.05947931 = product of:
          0.11895862 = sum of:
            0.11895862 = weight(_text_:n in 2688) [ClassicSimilarity], result of:
              0.11895862 = score(doc=2688,freq=10.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.53260374 = fieldWeight in 2688, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2688)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The widespread availability of the Internet and the variety of Internet-based applications have resulted in a significant increase in the number of web pages. Determining the behaviors of search engine users has become a critical step in enhancing search engine performance. Search engine user behaviors can be determined by content-based or content-ignorant algorithms. Although many content-ignorant studies have been performed to automatically identify new topics, previous results have demonstrated that spelling errors can cause significant errors in topic shift estimates. In this study, we focused on minimizing the number of wrong estimates that were based on spelling errors. We developed a new hybrid algorithm combining character n-gram and neural network methodologies, and compared the experimental results with results from previous studies. For the FAST and Excite datasets, the proposed algorithm improved topic shift estimates by 6.987% and 2.639%, respectively. Moreover, we analyzed the performance of the character n-gram method in different aspects, including a comparison with the Levenshtein edit-distance method. The experimental results demonstrated that the character n-gram method outperformed the Levenshtein edit-distance method in terms of topic identification.
    Object
    n-grams
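
A hedged sketch of the comparison mentioned in the abstract of entry 14 above (my own toy code, not the authors' hybrid algorithm, which also involves a neural network): both a character n-gram similarity and the Levenshtein edit distance tolerate a single spelling error between two consecutive queries, so neither measure flags a spurious topic shift for it.

```python
def trigrams(s: str) -> set:
    padded = f"__{s}__"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def trigram_sim(a: str, b: str) -> float:
    """Jaccard similarity over character trigrams."""
    ga, gb = trigrams(a), trigrams(b)
    return len(ga & gb) / len(ga | gb)

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def same_topic(q1: str, q2: str, sim_min: float = 0.4, edits_max: int = 2) -> dict:
    """Does each measure treat two consecutive queries as the same topic?"""
    return {"trigram": trigram_sim(q1, q2) >= sim_min,
            "levenshtein": levenshtein(q1, q2) <= edits_max}

print(same_topic("informatin retrieval", "information retrieval"))
# -> {'trigram': True, 'levenshtein': True}: the misspelling does not look like a topic shift
```
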
  15. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.03
    0.029027898 = sum of:
      0.01040793 = product of:
        0.04163172 = sum of:
          0.04163172 = weight(_text_:authors in 5089) [ClassicSimilarity], result of:
            0.04163172 = score(doc=5089,freq=2.0), product of:
              0.23615624 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05180212 = queryNorm
              0.17628889 = fieldWeight in 5089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5089)
        0.25 = coord(1/4)
      0.018619968 = product of:
        0.037239935 = sum of:
          0.037239935 = weight(_text_:n in 5089) [ClassicSimilarity], result of:
            0.037239935 = score(doc=5089,freq=2.0), product of:
              0.22335295 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.05180212 = queryNorm
              0.16673133 = fieldWeight in 5089, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5089)
        0.5 = coord(1/2)
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
    Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  16. Warner, A.J.: Natural language processing (1987) 0.03
    0.02807389 = product of:
      0.05614778 = sum of:
        0.05614778 = product of:
          0.11229556 = sum of:
            0.11229556 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.11229556 = score(doc=337,freq=2.0), product of:
                0.1814022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05180212 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  17. Chen, L.; Fang, H.: ¬An automatic method for extracting innovative ideas based on the Scopus® database (2019) 0.03
    0.026599953 = product of:
      0.053199906 = sum of:
        0.053199906 = product of:
          0.10639981 = sum of:
            0.10639981 = weight(_text_:n in 5310) [ClassicSimilarity], result of:
              0.10639981 = score(doc=5310,freq=8.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.47637522 = fieldWeight in 5310, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The novelty of knowledge claims in a research paper can be considered an evaluation criterion for papers to supplement citations. To provide a foundation for research evaluation from the perspective of innovativeness, we propose an automatic approach for extracting innovative ideas from the abstracts of technology and engineering papers. The approach extracts N-grams as candidates based on part-of-speech tagging and determines whether they are novel by checking in the Scopus® database whether they have ever been presented previously. Moreover, we discussed the distributions of innovative ideas in different abstract structures. To improve the performance by excluding noisy N-grams, a list of stopwords and a list of research description characteristics were developed. We selected abstracts of articles published from 2011 to 2017 with the topic of semantic analysis as the experimental texts. Excluding noisy N-grams, considering the distribution of innovative ideas in abstracts, and suitably combining N-grams can effectively improve the performance of automatic innovative idea extraction. Unlike co-word and co-citation analysis, innovative-idea extraction aims to identify how a paper differs from all previously published papers.
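
A hedged sketch of the candidate-extraction step only (my own simplification: a stopword filter stands in for the part-of-speech tagging, and a set of previously seen phrases stands in for the Scopus® novelty lookup; all names and example strings are illustrative):

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "for", "we", "is", "are", "this", "that"}

def word_ngrams(text: str, n: int) -> list:
    tokens = re.findall(r"[a-z][a-z-]+", text.lower())
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def candidate_phrases(abstract: str, sizes=(2, 3)) -> set:
    """Word n-grams that neither start nor end with a stopword."""
    candidates = set()
    for n in sizes:
        for gram in word_ngrams(abstract, n):
            words = gram.split()
            if words[0] not in STOPWORDS and words[-1] not in STOPWORDS:
                candidates.add(gram)
    return candidates

def novel_phrases(abstract: str, previously_seen: set) -> set:
    """Keep only candidates never seen before (stand-in for the database check)."""
    return candidate_phrases(abstract) - previously_seen

seen = candidate_phrases("Semantic analysis of citation networks.")
print(sorted(novel_phrases("We propose shape recovery analysis for semantic analysis.", seen)))
# 'semantic analysis' is dropped as already known; the remaining n-grams count as novel
```
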
  18. Dampz, N.: ChatGPT interpretiert jetzt auch Bilder : Neue Version (2023) 0.03
    0.026599953 = product of:
      0.053199906 = sum of:
        0.053199906 = product of:
          0.10639981 = sum of:
            0.10639981 = weight(_text_:n in 874) [ClassicSimilarity], result of:
              0.10639981 = score(doc=874,freq=2.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.47637522 = fieldWeight in 874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.078125 = fieldNorm(doc=874)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Rorvig, M.; Smith, M.M.; Uemura, A.: ¬The N-gram hypothesis applied to matched sets of visualized Japanese-English technical documents (1999) 0.03
    0.026332611 = product of:
      0.052665222 = sum of:
        0.052665222 = product of:
          0.105330445 = sum of:
            0.105330445 = weight(_text_:n in 6675) [ClassicSimilarity], result of:
              0.105330445 = score(doc=6675,freq=4.0), product of:
                0.22335295 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.05180212 = queryNorm
                0.47158742 = fieldWeight in 6675, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6675)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Shape Recovery Analysis (SHERA), a new visual analytical technique, is applied to the N-Gram hypothesis on matched Japanese-English technical documents supplied by the National Center for Science Information Systems (NACSIS) in Japan. The results of the SHERA study reveal compaction in the translation of Japanese subject terms to English subject terms. Surprisingly, the bigram approach to the Japanese data yields a remarkable similarity to the matching visualized English texts
  20. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.024564654 = product of:
      0.049129307 = sum of:
        0.049129307 = product of:
          0.098258615 = sum of:
            0.098258615 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.098258615 = score(doc=3164,freq=2.0), product of:
                0.1814022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05180212 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248

Languages

  • e 94
  • d 22
  • f 2
  • m 1

Types

  • a 93
  • el 16
  • m 12
  • s 7
  • x 4
  • n 2
  • p 2
  • d 1

Classifications