Search (21 results, page 1 of 2)

  • year_i:[2010 TO 2020}
  • theme_ss:"Computerlinguistik"
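  These two entries are Solr filter queries (fq); [2010 TO 2020} is Solr's half-open range syntax, i.e. year >= 2010 and year < 2020. A minimal sketch of an equivalent request, assuming a standard Solr select handler (the field names and filter values come from the facets above; the base URL and core name are placeholders):

    import requests  # third-party HTTP client: pip install requests

    params = {
        "q": "*:*",
        "fq": [
            "year_i:[2010 TO 2020}",     # half-open range: 2010 inclusive, 2020 exclusive
            'theme_ss:"Computerlinguistik"',
        ],
        "rows": 20,
        "debugQuery": "true",            # asks Solr for the per-document score explanations shown below
    }
    resp = requests.get("http://localhost:8983/solr/mycore/select", params=params)
    print(resp.json()["response"]["numFound"])  # e.g. 21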
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    0.087931074 = product of:
      0.21982768 = sum of:
        0.20254931 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.20254931 = score(doc=563,freq=2.0), product of:
            0.36039644 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042509552 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.017278373 = product of:
          0.034556746 = sum of:
            0.034556746 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.034556746 = score(doc=563,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
     A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. See: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
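     Note
     The indented breakdowns are Lucene "explain" output for ClassicSimilarity (TF-IDF) ranking: each matching term contributes queryWeight * fieldWeight, where tf = sqrt(freq), idf = 1 + ln((maxDocs + 1) / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord(n/m) = n/m downweights a clause group in which only n of m query clauses matched. The sketch below reproduces this record's 0.087931074 from the quantities in the tree; the helper names are illustrative, not Lucene API, and the _text_:2f term apparently indexes percent-encoded slashes (%2F) from a URL:

       import math

       MAX_DOCS = 44218          # maxDocs from the explain output
       QUERY_NORM = 0.042509552  # queryNorm from the explain output

       def idf(doc_freq):
           # ClassicSimilarity idf: 1 + ln((maxDocs + 1) / (docFreq + 1))
           return 1.0 + math.log((MAX_DOCS + 1) / (doc_freq + 1))

       def term_score(freq, doc_freq, field_norm):
           # per-term score = queryWeight * fieldWeight
           query_weight = idf(doc_freq) * QUERY_NORM                    # idf * queryNorm
           field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm  # tf * idf * fieldNorm
           return query_weight * field_weight

       w_2f = term_score(2.0, 24, 0.046875)    # _text_:2f  -> ~0.20254931
       w_22 = term_score(2.0, 3622, 0.046875)  # _text_:22  -> ~0.034556746

       # the _text_:22 clause carries coord(1/2); the whole query carries coord(2/5)
       score = (w_2f + w_22 * 0.5) * 0.4
       print(f"{score:.9f}")                   # ~0.087931074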
  2. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.01
    0.010501078 = product of:
      0.05250539 = sum of:
        0.05250539 = sum of:
          0.032347288 = weight(_text_:management in 3807) [ClassicSimilarity], result of:
            0.032347288 = score(doc=3807,freq=6.0), product of:
              0.14328322 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.042509552 = queryNorm
              0.22575769 = fieldWeight in 3807, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
          0.0201581 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.0201581 = score(doc=3807,freq=2.0), product of:
              0.14886121 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.042509552 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
      0.2 = coord(1/5)
    
    Abstract
     Purpose: Academic authors tend to define terms to meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified the KM terms that authors used in academic outlets from 1996 to 2006 to define KM. Data were collected under strict criteria, including the requirement that each definition be a unique instance. From 2006 onwards, no new unique definitions could be identified, only repeated use of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues.
     Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods.
     Findings: By simplifying term relationships through lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). From an academic point of view, KM therefore refers to people processing contextualised content.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 67(2015) no.2, S.203-229
  3. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.00
    0.004607566 = product of:
      0.02303783 = sum of:
        0.02303783 = product of:
          0.04607566 = sum of:
            0.04607566 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.04607566 = score(doc=1490,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 3.2015 9:30:24
  4. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.00
    0.003773064 = product of:
      0.018865319 = sum of:
        0.018865319 = product of:
          0.037730638 = sum of:
            0.037730638 = weight(_text_:management in 1312) [ClassicSimilarity], result of:
              0.037730638 = score(doc=1312,freq=4.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.2633291 = fieldWeight in 1312, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot test in which two information specialists use the adapted application for a realistic information-seeking task. According to the results, the ontology of influenza epidemic management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design.
  5. Colace, F.; Santo, M. De; Greco, L.; Napoletano, P.: Weighted word pairs for query expansion (2015) 0.00
    0.0037351425 = product of:
      0.018675713 = sum of:
        0.018675713 = product of:
          0.037351426 = sum of:
            0.037351426 = weight(_text_:management in 2687) [ClassicSimilarity], result of:
              0.037351426 = score(doc=2687,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.2606825 = fieldWeight in 2687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2687)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 51(2015) no.1, S.179-193
  6. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.00
    0.0034556747 = product of:
      0.017278373 = sum of:
        0.017278373 = product of:
          0.034556746 = sum of:
            0.034556746 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.034556746 = score(doc=1848,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The goal of entity linking is to associate references to an entity found in unstructured natural language content with an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  7. Doko, A.; Stula, M.; Seric, L.: Improved sentence retrieval using local context and sentence length (2013) 0.00
    0.003201551 = product of:
      0.016007755 = sum of:
        0.016007755 = product of:
          0.03201551 = sum of:
            0.03201551 = weight(_text_:management in 2705) [ClassicSimilarity], result of:
              0.03201551 = score(doc=2705,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.22344214 = fieldWeight in 2705, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2705)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 49(2013) no.6, S.1301-1312
  8. Fernández, R.T.; Losada, D.E.: Effective sentence retrieval based on query-independent evidence (2012) 0.00
    0.003201551 = product of:
      0.016007755 = sum of:
        0.016007755 = product of:
          0.03201551 = sum of:
            0.03201551 = weight(_text_:management in 2728) [ClassicSimilarity], result of:
              0.03201551 = score(doc=2728,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.22344214 = fieldWeight in 2728, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2728)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 48(2012) no.6, S.1203-1229
  9. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; De Roeck, A.: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.00
    0.003201551 = product of:
      0.016007755 = sum of:
        0.016007755 = product of:
          0.03201551 = sum of:
            0.03201551 = weight(_text_:management in 2738) [ClassicSimilarity], result of:
              0.03201551 = score(doc=2738,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.22344214 = fieldWeight in 2738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2738)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 48(2012) no.3, S.552-568
  10. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.00
    0.003201551 = product of:
      0.016007755 = sum of:
        0.016007755 = product of:
          0.03201551 = sum of:
            0.03201551 = weight(_text_:management in 2849) [ClassicSimilarity], result of:
              0.03201551 = score(doc=2849,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.22344214 = fieldWeight in 2849, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2849)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
     This publication originated as part of a thesis for the degree of Master of Science FHO in Business Administration, Major Information and Data Management.
  11. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.00
    0.003201551 = product of:
      0.016007755 = sum of:
        0.016007755 = product of:
          0.03201551 = sum of:
            0.03201551 = weight(_text_:management in 3414) [ClassicSimilarity], result of:
              0.03201551 = score(doc=3414,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.22344214 = fieldWeight in 3414, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3414)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 53(2017) no.3, S.653-665
  12. Al-Smadi, M.; Jaradat, Z.; Al-Ayyoub, M.; Jararweh, Y.: Paraphrase identification and semantic text similarity analysis in Arabic news tweets using lexical, syntactic, and semantic features (2017) 0.00
    0.003201551 = product of:
      0.016007755 = sum of:
        0.016007755 = product of:
          0.03201551 = sum of:
            0.03201551 = weight(_text_:management in 5095) [ClassicSimilarity], result of:
              0.03201551 = score(doc=5095,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.22344214 = fieldWeight in 5095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5095)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 53(2017) no.3, S.640-652
  13. Fóris, A.: Network theory and terminology (2013) 0.00
    0.002879729 = product of:
      0.014398645 = sum of:
        0.014398645 = product of:
          0.02879729 = sum of:
            0.02879729 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.02879729 = score(doc=1365,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    2. 9.2014 21:22:48
  14. Brychcín, T.; Konopík, M.: HPS: High precision stemmer (2015) 0.00
    0.002667959 = product of:
      0.013339795 = sum of:
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 2686) [ClassicSimilarity], result of:
              0.02667959 = score(doc=2686,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 51(2015) no.1, S.68-91
  15. Gencosman, B.C.; Ozmutlu, H.C.; Ozmutlu, S.: Character n-gram application for automatic new topic identification (2014) 0.00
    0.002667959 = product of:
      0.013339795 = sum of:
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 2688) [ClassicSimilarity], result of:
              0.02667959 = score(doc=2688,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 2688, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2688)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 50(2014) no.6, S.821-856
  16. Sankarasubramaniam, Y.; Ramanathan, K.; Ghosh, S.: Text summarization using Wikipedia (2014) 0.00
    0.002667959 = product of:
      0.013339795 = sum of:
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 2693) [ClassicSimilarity], result of:
              0.02667959 = score(doc=2693,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 2693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2693)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 50(2014) no.3, S.443-461
  17. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: ¬A deep network model for paraphrase detection in short text messages (2018) 0.00
    0.002667959 = product of:
      0.013339795 = sum of:
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 5043) [ClassicSimilarity], result of:
              0.02667959 = score(doc=5043,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 5043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5043)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 54(2018) no.6, S.922-937
  18. K., Vani; Gupta, D.: Unmasking text plagiarism using syntactic-semantic based natural language processing techniques : comparisons, analysis and challenges (2018) 0.00
    0.002667959 = product of:
      0.013339795 = sum of:
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 5084) [ClassicSimilarity], result of:
              0.02667959 = score(doc=5084,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 5084, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5084)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 54(2018) no.3, S.408-432
  19. Fang, L.; Tuan, L.A.; Hui, S.C.; Wu, L.: Syntactic based approach for grammar question retrieval (2018) 0.00
    0.002667959 = product of:
      0.013339795 = sum of:
        0.013339795 = product of:
          0.02667959 = sum of:
            0.02667959 = weight(_text_:management in 5086) [ClassicSimilarity], result of:
              0.02667959 = score(doc=5086,freq=2.0), product of:
                0.14328322 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.042509552 = queryNorm
                0.18620178 = fieldWeight in 5086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5086)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 54(2018) no.2, S.184-202
  20. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    0.002303783 = product of:
      0.011518915 = sum of:
        0.011518915 = product of:
          0.02303783 = sum of:
            0.02303783 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.02303783 = score(doc=4217,freq=2.0), product of:
                0.14886121 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042509552 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 1.2018 11:32:44