Search (104 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07227165 = sum of:
      0.053885084 = product of:
        0.21554033 = sum of:
          0.21554033 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21554033 = score(doc=562,freq=2.0), product of:
              0.38351142 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.045236014 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.018386567 = product of:
        0.036773134 = sum of:
          0.036773134 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036773134 = score(doc=562,freq=2.0), product of:
              0.15840882 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045236014 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
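The indented breakdowns attached to each result are Lucene "explain" trees for the classic TF-IDF similarity (ClassicSimilarity). As a minimal sketch, assuming Lucene's standard formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and taking queryNorm verbatim from the output above, the top-scoring branch of entry 1 can be recomputed as:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity inverse document frequency.
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    # Term-frequency component: square root of the raw frequency.
    return math.sqrt(freq)

query_norm = 0.045236014  # copied from the explain output above

# Term "_text_:3a" in doc 562: docFreq=24, maxDocs=44218, freq=2.0, fieldNorm=0.046875
idf_3a = idf(24, 44218)                     # ~8.478011
query_weight = idf_3a * query_norm          # ~0.38351142
field_weight = tf(2.0) * idf_3a * 0.046875  # ~0.56201804
score = query_weight * field_weight         # ~0.21554033
```

The entry's final score then multiplies each such branch by its coord factor (e.g. 0.25 = coord(1/4)) and sums the branches, which reproduces the 0.07227165 total shown for entry 1.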
  2. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.04
    0.03855045 = product of:
      0.0771009 = sum of:
        0.0771009 = sum of:
          0.046456624 = weight(_text_:n in 190) [ClassicSimilarity], result of:
            0.046456624 = score(doc=190,freq=2.0), product of:
              0.19504215 = queryWeight, product of:
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.045236014 = queryNorm
              0.23818761 = fieldWeight in 190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3116565 = idf(docFreq=1611, maxDocs=44218)
                0.0390625 = fieldNorm(doc=190)
          0.030644279 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
            0.030644279 = score(doc=190,freq=2.0), product of:
              0.15840882 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045236014 = queryNorm
              0.19345059 = fieldWeight in 190, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=190)
      0.5 = coord(1/2)
    
    Date
    14. 4.2007 10:04:22
    Editor
    Weber, N.
  3. ISO/DIS 1087-2:1994-09: Terminology work, vocabulary : pt.2: computational aids (1994) 0.04
    0.0371653 = product of:
      0.0743306 = sum of:
        0.0743306 = product of:
          0.1486612 = sum of:
            0.1486612 = weight(_text_:n in 2912) [ClassicSimilarity], result of:
              0.1486612 = score(doc=2912,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.76220036 = fieldWeight in 2912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=2912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    n
  4. ISO/TR 12618:1994: Computational aids in terminology : creation and use of terminological databases and text corpora (1994) 0.04
    0.0371653 = product of:
      0.0743306 = sum of:
        0.0743306 = product of:
          0.1486612 = sum of:
            0.1486612 = weight(_text_:n in 2913) [ClassicSimilarity], result of:
              0.1486612 = score(doc=2913,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.76220036 = fieldWeight in 2913, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=2913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    n
  5. Sager, N.: Natural language information processing (1981) 0.04
    0.0371653 = product of:
      0.0743306 = sum of:
        0.0743306 = product of:
          0.1486612 = sum of:
            0.1486612 = weight(_text_:n in 5313) [ClassicSimilarity], result of:
              0.1486612 = score(doc=5313,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.76220036 = fieldWeight in 5313, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.125 = fieldNorm(doc=5313)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Gonzalo, J.; Verdejo, F.; Peters, C.; Calzolari, N.: Applying EuroWordNet to cross-language text retrieval (1998) 0.03
    0.032519635 = product of:
      0.06503927 = sum of:
        0.06503927 = product of:
          0.13007854 = sum of:
            0.13007854 = weight(_text_:n in 6445) [ClassicSimilarity], result of:
              0.13007854 = score(doc=6445,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.6669253 = fieldWeight in 6445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Ahmed, F.; Nürnberger, A.: Evaluation of n-gram conflation approaches for Arabic text retrieval (2009) 0.03
    0.031164052 = product of:
      0.062328104 = sum of:
        0.062328104 = product of:
          0.12465621 = sum of:
            0.12465621 = weight(_text_:n in 2941) [ClassicSimilarity], result of:
              0.12465621 = score(doc=2941,freq=10.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.63912445 = fieldWeight in 2941, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2941)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper we present a language-independent approach for conflation that does not depend on predefined rules or prior knowledge of the target language. The proposed unsupervised method is based on an enhancement of the pure n-gram model that can group related words based on various string-similarity measures, while restricting the search to specific locations of the target word by taking into account the order of n-grams. We show that the method is effective in achieving high similarity scores for all word-form variations and reduces ambiguity, i.e., obtains higher precision and recall, compared to pure n-gram-based approaches for English, Portuguese, and Arabic. The proposed method is especially suited for conflation in Arabic, since Arabic is a highly inflectional language. We therefore also present an adaptive user interface for Arabic text retrieval called araSearch, which serves as a metasearch interface to existing search engines. The system can extend a query using the proposed conflation approach so that additional results for relevant subwords are found automatically.
    Object
    n-grams
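The conflation approach above groups related word forms by character n-gram overlap. The paper's specific similarity measures and positional restrictions are not reproduced here; as a hypothetical illustration of the general idea, one common measure is the Dice coefficient over padded character bigrams:

```python
def char_ngrams(word: str, n: int = 2) -> set[str]:
    # Pad with '#' so word-initial and word-final characters form full n-grams.
    padded = f"#{word}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice(a: str, b: str, n: int = 2) -> float:
    # Dice coefficient: 2 * |shared n-grams| / (|n-grams of a| + |n-grams of b|)
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))
```

Word forms whose similarity exceeds a chosen threshold would be conflated into one group; unlike stemming, this requires no language-specific rules, which is what makes such approaches attractive for a highly inflectional language like Arabic.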
  8. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automatic extractions (1997) 0.03
    0.027873974 = product of:
      0.05574795 = sum of:
        0.05574795 = product of:
          0.1114959 = sum of:
            0.1114959 = weight(_text_:n in 733) [ClassicSimilarity], result of:
              0.1114959 = score(doc=733,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.57165027 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=733)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Hrsg.: N. Fuhr u.a.
  9. Alonge, A.; Calzolari, N.; Vossen, P.; Bloksma, L.; Castellon, I.; Marti, M.A.; Peters, W.: ¬The linguistic design of the EuroWordNet database (1998) 0.03
    0.027873974 = product of:
      0.05574795 = sum of:
        0.05574795 = product of:
          0.1114959 = sum of:
            0.1114959 = weight(_text_:n in 6440) [ClassicSimilarity], result of:
              0.1114959 = score(doc=6440,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.57165027 = fieldWeight in 6440, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6440)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Figuerola, C.G.; Gomez, R.; Lopez de San Roman, E.: Stemming and n-grams in Spanish : an evaluation of their impact in information retrieval (2000) 0.03
    0.027873974 = product of:
      0.05574795 = sum of:
        0.05574795 = product of:
          0.1114959 = sum of:
            0.1114959 = weight(_text_:n in 6501) [ClassicSimilarity], result of:
              0.1114959 = score(doc=6501,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.57165027 = fieldWeight in 6501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6501)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.026942542 = product of:
      0.053885084 = sum of:
        0.053885084 = product of:
          0.21554033 = sum of:
            0.21554033 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21554033 = score(doc=862,freq=2.0), product of:
                0.38351142 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.045236014 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  12. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.03
    0.025970044 = product of:
      0.051940087 = sum of:
        0.051940087 = product of:
          0.103880174 = sum of:
            0.103880174 = weight(_text_:n in 2688) [ClassicSimilarity], result of:
              0.103880174 = score(doc=2688,freq=10.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.53260374 = fieldWeight in 2688, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2688)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The widespread availability of the Internet and the variety of Internet-based applications have resulted in a significant increase in the number of web pages. Determining the behaviors of search engine users has become a critical step in enhancing search engine performance. Search engine user behaviors can be determined by content-based or content-ignorant algorithms. Although many content-ignorant studies have been performed to automatically identify new topics, previous results have demonstrated that spelling errors can cause significant errors in topic shift estimates. In this study, we focused on minimizing the number of wrong estimates caused by spelling errors. We developed a new hybrid algorithm combining character n-gram and neural network methodologies, and compared the experimental results with results from previous studies. For the FAST and Excite datasets, the proposed algorithm improved topic shift estimates by 6.987% and 2.639%, respectively. Moreover, we analyzed the performance of the character n-gram method in several respects, including a comparison with the Levenshtein edit-distance method. The experimental results demonstrated that the character n-gram method outperformed the Levenshtein edit-distance method in terms of topic identification.
    Object
    n-grams
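The abstract above compares character n-grams against Levenshtein edit distance for handling spelling errors. For reference, a standard dynamic-programming edit distance (a generic implementation, not the study's code) looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    # Minimum number of single-character insertions, deletions,
    # and substitutions needed to turn string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

Two query strings a small edit distance apart (e.g. differing by a single typo) can be treated as the same topic, which is the role edit distance plays as a comparison method here.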
  13. Warner, A.J.: Natural language processing (1987) 0.02
    0.024515422 = product of:
      0.049030844 = sum of:
        0.049030844 = product of:
          0.09806169 = sum of:
            0.09806169 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09806169 = score(doc=337,freq=2.0), product of:
                0.15840882 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045236014 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  14. Chen, L.; Fang, H.: ¬An automatic method for extracting innovative ideas based on the Scopus® database (2019) 0.02
    0.023228312 = product of:
      0.046456624 = sum of:
        0.046456624 = product of:
          0.09291325 = sum of:
            0.09291325 = weight(_text_:n in 5310) [ClassicSimilarity], result of:
              0.09291325 = score(doc=5310,freq=8.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.47637522 = fieldWeight in 5310, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The novelty of knowledge claims in a research paper can be considered an evaluation criterion for papers to supplement citations. To provide a foundation for research evaluation from the perspective of innovativeness, we propose an automatic approach for extracting innovative ideas from the abstracts of technology and engineering papers. The approach extracts N-grams as candidates based on part-of-speech tagging and determines whether they are novel by checking the Scopus® database to determine whether they had ever been presented previously. Moreover, we discussed the distributions of innovative ideas in different abstract structures. To improve the performance by excluding noisy N-grams, a list of stopwords and a list of research description characteristics were developed. We selected abstracts of articles published from 2011 to 2017 with the topic of semantic analysis as the experimental texts. Excluding noisy N-grams, considering the distribution of innovative ideas in abstracts, and suitably combining N-grams can effectively improve the performance of automatic innovative idea extraction. Unlike co-word and co-citation analysis, innovative-idea extraction aims to identify the differences in a paper from all previously published papers.
  15. Dampz, N.: ChatGPT interpretiert jetzt auch Bilder : Neue Version (2023) 0.02
    0.023228312 = product of:
      0.046456624 = sum of:
        0.046456624 = product of:
          0.09291325 = sum of:
            0.09291325 = weight(_text_:n in 874) [ClassicSimilarity], result of:
              0.09291325 = score(doc=874,freq=2.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.47637522 = fieldWeight in 874, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.078125 = fieldNorm(doc=874)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Rorvig, M.; Smith, M.M.; Uemura, A.: ¬The N-gram hypothesis applied to matched sets of visualized Japanese-English technical documents (1999) 0.02
    0.022994855 = product of:
      0.04598971 = sum of:
        0.04598971 = product of:
          0.09197942 = sum of:
            0.09197942 = weight(_text_:n in 6675) [ClassicSimilarity], result of:
              0.09197942 = score(doc=6675,freq=4.0), product of:
                0.19504215 = queryWeight, product of:
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.045236014 = queryNorm
                0.47158742 = fieldWeight in 6675, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.3116565 = idf(docFreq=1611, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6675)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Shape Recovery Analysis (SHERA), a new visual analytical technique, is applied to the N-Gram hypothesis on matched Japanese-English technical documents supplied by the National Center for Science Information Systems (NACSIS) in Japan. The results of the SHERA study reveal compaction in the translation of Japanese subject terms to English subject terms. Surprisingly, the bigram approach to the Japanese data yields a remarkable similarity to the matching visualized English texts
  17. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.021450995 = product of:
      0.04290199 = sum of:
        0.04290199 = product of:
          0.08580398 = sum of:
            0.08580398 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.08580398 = score(doc=3164,freq=2.0), product of:
                0.15840882 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045236014 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  18. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.021450995 = product of:
      0.04290199 = sum of:
        0.04290199 = product of:
          0.08580398 = sum of:
            0.08580398 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.08580398 = score(doc=4506,freq=2.0), product of:
                0.15840882 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045236014 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  19. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.021450995 = product of:
      0.04290199 = sum of:
        0.04290199 = product of:
          0.08580398 = sum of:
            0.08580398 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.08580398 = score(doc=6672,freq=2.0), product of:
                0.15840882 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045236014 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  20. New tools for human translators (1997) 0.02
    0.021450995 = product of:
      0.04290199 = sum of:
        0.04290199 = product of:
          0.08580398 = sum of:
            0.08580398 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.08580398 = score(doc=1179,freq=2.0), product of:
                0.15840882 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045236014 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19

Languages

  • e 78
  • d 22
  • f 2
  • m 1

Types

  • a 80
  • el 15
  • m 9
  • s 7
  • x 4
  • n 2
  • p 2
  • d 1