Search (306 results, page 1 of 16)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    0.23953128 = product of:
      0.31937504 = sum of:
        0.07504265 = product of:
          0.22512795 = sum of:
            0.22512795 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.22512795 = score(doc=562,freq=2.0), product of:
                0.4005707 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047248192 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.22512795 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.22512795 = score(doc=562,freq=2.0), product of:
            0.4005707 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047248192 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.019204432 = product of:
          0.038408864 = sum of:
            0.038408864 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.038408864 = score(doc=562,freq=2.0), product of:
                0.16545512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047248192 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
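    The indented block under each hit is Lucene's "explain" output for its relevance score (ClassicSimilarity, i.e. classic TF-IDF with coordination factors). As a minimal sketch, using only the constants shown in the tree for hit 1 above, the following Python reproduces the displayed score:

      from math import sqrt

      # All constants below are copied verbatim from the explanation tree of hit 1.
      QUERY_NORM = 0.047248192

      def term_score(freq, idf, field_norm):
          # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
          tf = sqrt(freq)                       # 1.4142135 for freq=2.0
          query_weight = idf * QUERY_NORM       # 0.4005707 for idf=8.478011
          field_weight = tf * idf * field_norm  # 0.56201804
          return query_weight * field_weight    # 0.22512795

      s_3a = term_score(2.0, 8.478011, 0.046875) * (1 / 3)   # coord(1/3) on the inner sum
      s_2f = term_score(2.0, 8.478011, 0.046875)
      s_22 = term_score(2.0, 3.5018296, 0.046875) * (1 / 2)  # coord(1/2) on the inner sum

      score = (s_3a + s_2f + s_22) * (3 / 4)    # coord(3/4): 3 of 4 query clauses match
      print(score)                              # ~0.23953128, shown rounded as 0.24

    The coord factors down-weight (sub)queries in which only some clauses match, which is why the final value is lower than the plain sum of the per-term weights.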
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.15
    0.1500853 = product of:
      0.3001706 = sum of:
        0.07504265 = product of:
          0.22512795 = sum of:
            0.22512795 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.22512795 = score(doc=862,freq=2.0), product of:
                0.4005707 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047248192 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.22512795 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.22512795 = score(doc=862,freq=2.0), product of:
            0.4005707 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047248192 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.15
    0.1471357 = product of:
      0.2942714 = sum of:
        0.22512795 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.22512795 = score(doc=563,freq=2.0), product of:
            0.4005707 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.047248192 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.06914345 = sum of:
          0.030734586 = weight(_text_:science in 563) [ClassicSimilarity], result of:
            0.030734586 = score(doc=563,freq=4.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.24694869 = fieldWeight in 563, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
          0.038408864 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
            0.038408864 = score(doc=563,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.23214069 = fieldWeight in 563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=563)
      0.5 = coord(2/4)
    
    Content
    A thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Pereira, C.N.; Grosz, B.J.: Natural language processing (1994) 0.04
    0.042459704 = product of:
      0.08491941 = sum of:
        0.059307255 = weight(_text_:management in 8602) [ClassicSimilarity], result of:
          0.059307255 = score(doc=8602,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.37240356 = fieldWeight in 8602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.078125 = fieldNorm(doc=8602)
        0.025612153 = product of:
          0.051224306 = sum of:
            0.051224306 = weight(_text_:science in 8602) [ClassicSimilarity], result of:
              0.051224306 = score(doc=8602,freq=4.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.41158113 = fieldWeight in 8602, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8602)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
    Review in: Information processing and management 32(1996) no.1, S.122-123 (P.B. Heidorn)
    LCSH
    Natural language processing (Computer science)
    Subject
    Natural language processing (Computer science)
  5. Warner, A.J.: Natural language processing (1987) 0.04
    0.04009433 = product of:
      0.16037732 = sum of:
        0.16037732 = sum of:
          0.057953686 = weight(_text_:science in 337) [ClassicSimilarity], result of:
            0.057953686 = score(doc=337,freq=2.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.4656509 = fieldWeight in 337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.125 = fieldNorm(doc=337)
          0.102423646 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
            0.102423646 = score(doc=337,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.61904186 = fieldWeight in 337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=337)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  6. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    0.03508254 = product of:
      0.14033017 = sum of:
        0.14033017 = sum of:
          0.05070948 = weight(_text_:science in 4506) [ClassicSimilarity], result of:
            0.05070948 = score(doc=4506,freq=2.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.40744454 = fieldWeight in 4506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.109375 = fieldNorm(doc=4506)
          0.08962069 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
            0.08962069 = score(doc=4506,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.5416616 = fieldWeight in 4506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4506)
      0.25 = coord(1/4)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
  7. Prasad, A.R.D.; Kar, B.B.: Parsing Boolean search expression using definite clause grammars (1994) 0.03
    0.030967113 = product of:
      0.061934225 = sum of:
        0.047445804 = weight(_text_:management in 8188) [ClassicSimilarity], result of:
          0.047445804 = score(doc=8188,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.29792285 = fieldWeight in 8188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0625 = fieldNorm(doc=8188)
        0.014488421 = product of:
          0.028976843 = sum of:
            0.028976843 = weight(_text_:science in 8188) [ClassicSimilarity], result of:
              0.028976843 = score(doc=8188,freq=2.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.23282544 = fieldWeight in 8188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8188)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Briefly discusses the role of search languages in information retrieval and broadly groups search languages into four categories. Explains the idea of definite clause grammars and demonstrates how parsers for Boolean logic-based search languages can easily be developed. Presents partial Prolog code for the parser used in an object-oriented bibliographic database management system (a rough illustrative sketch of such a parser follows this record).
    Source
    Library science with a slant to documentation. 31(1994) no.1, S.24-26
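    The abstract of item 7 above describes grammar-driven parsing of Boolean search expressions; the paper's own parser is given as Prolog definite clause grammar rules. Purely as a hedged illustration of the same idea, and not the paper's parser, here is a minimal recursive-descent sketch; the grammar (OR with lowest precedence, then AND, then NOT, plus parentheses) and all names are assumptions chosen for this example:

      import re

      # Hypothetical grammar, for illustration only:
      #   expr   := term ('OR' term)*
      #   term   := factor ('AND' factor)*
      #   factor := 'NOT' factor | '(' expr ')' | WORD

      def tokenize(query):
          return re.findall(r"\(|\)|\w+", query)

      def parse(tokens):
          pos = 0

          def peek():
              return tokens[pos] if pos < len(tokens) else None

          def eat(expected=None):
              nonlocal pos
              if pos >= len(tokens):
                  raise SyntaxError("unexpected end of expression")
              tok = tokens[pos]
              if expected and tok != expected:
                  raise SyntaxError(f"expected {expected}, got {tok}")
              pos += 1
              return tok

          def expr():
              node = term()
              while peek() == "OR":
                  eat("OR")
                  node = ("OR", node, term())
              return node

          def term():
              node = factor()
              while peek() == "AND":
                  eat("AND")
                  node = ("AND", node, factor())
              return node

          def factor():
              if peek() == "NOT":
                  eat("NOT")
                  return ("NOT", factor())
              if peek() == "(":
                  eat("(")
                  node = expr()
                  eat(")")
                  return node
              return ("TERM", eat())

          tree = expr()
          if pos != len(tokens):
              raise SyntaxError("trailing tokens: " + " ".join(tokens[pos:]))
          return tree

      print(parse(tokenize("cats AND (dogs OR NOT mice)")))
      # ('AND', ('TERM', 'cats'), ('OR', ('TERM', 'dogs'), ('NOT', ('TERM', 'mice'))))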
  8. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.03
    0.030070748 = product of:
      0.12028299 = sum of:
        0.12028299 = sum of:
          0.043465264 = weight(_text_:science in 4483) [ClassicSimilarity], result of:
            0.043465264 = score(doc=4483,freq=2.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.34923816 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
          0.07681773 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
            0.07681773 = score(doc=4483,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.46428138 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
      0.25 = coord(1/4)
    
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
  9. Montgomery, C.A.: Linguistics and information science (1972) 0.03
    0.028658492 = product of:
      0.057316985 = sum of:
        0.035584353 = weight(_text_:management in 6669) [ClassicSimilarity], result of:
          0.035584353 = score(doc=6669,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 6669, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=6669)
        0.021732632 = product of:
          0.043465264 = sum of:
            0.043465264 = weight(_text_:science in 6669) [ClassicSimilarity], result of:
              0.043465264 = score(doc=6669,freq=8.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.34923816 = fieldWeight in 6669, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6669)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper defines the relationship between linguistics and information science in terms of a common interest in natural language. The notion of automated processing of natural language - i.e., machine simulation of the language processing activities of a human - provides novel possibilities for interaction between linguists, who have a theoretical interest in such activities, and information scientists, who have more practical goals, e.g. simulating the language processing activities of an indexer with a machine. The concept of a natural language information system is introduced as a framework for reviewing automated language processing efforts by computational linguists and information scientists. In terms of this framework, the former have concentrated on automating the operations of the component for content analysis and representation, while the latter have emphasized the data management component. The complementary nature of these developments allows the postulation of an integrated approach to automated language processing. This approach, which is outlined in the final sections of the paper, incorporates current notions in linguistic theory and information science, as well as design features of recent computational linguistic models.
    Source
    Journal of the American Society for Information Science. 23(1972), S.195-219
  10. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.03
    0.025495915 = product of:
      0.05099183 = sum of:
        0.041936565 = weight(_text_:management in 1312) [ClassicSimilarity], result of:
          0.041936565 = score(doc=1312,freq=4.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.2633291 = fieldWeight in 1312, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1312)
        0.009055263 = product of:
          0.018110527 = sum of:
            0.018110527 = weight(_text_:science in 1312) [ClassicSimilarity], result of:
              0.018110527 = score(doc=1312,freq=2.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.1455159 = fieldWeight in 1312, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot test in which two information specialists use the adapted application for a realistic information-seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2531-2543
  11. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.03
    0.025475822 = product of:
      0.050951645 = sum of:
        0.035584353 = weight(_text_:management in 3414) [ClassicSimilarity], result of:
          0.035584353 = score(doc=3414,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 3414, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=3414)
        0.015367293 = product of:
          0.030734586 = sum of:
            0.030734586 = weight(_text_:science in 3414) [ClassicSimilarity], result of:
              0.030734586 = score(doc=3414,freq=4.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.24694869 = fieldWeight in 3414, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3414)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This study investigates how computational overhead for topic model training may be reduced by selectively removing terms from the vocabulary of text corpora being modeled. We compare the impact of removing singly occurring terms, the top 0.5%, 1% and 5% most frequently occurring terms and both top 0.5% most frequent and singly occurring terms, along with changes in the number of topics modeled (10, 20, 30, 40, 50, 100) using three datasets. Four outcome measures are compared. The removal of singly occurring terms has little impact on outcomes for all of the measures tested. Document discriminative capacity, as measured by the document space density, is reduced by the removal of frequently occurring terms, but increases with higher numbers of topics. Vocabulary size does not greatly influence entropy, but entropy is affected by the number of topics. Finally, topic similarity, as measured by pairwise topic similarity and Jensen-Shannon divergence, decreases with the removal of frequent terms. The findings have implications for information science research in information retrieval and informetrics that makes use of topic modeling.
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S0306457317300298.
    Source
    Information processing and management. 53(2017) no.3, S.653-665
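    Item 11 above describes pruning the training vocabulary (dropping singly occurring terms and/or the top 0.5-5% most frequently occurring terms) before topic model training. A minimal sketch of that preprocessing step, assuming plain tokenized documents; the thresholds, names and toy corpus are chosen here for illustration and are not taken from the paper:

      from collections import Counter

      def prune_vocabulary(docs, drop_singletons=True, top_frequent_pct=0.5):
          """Remove hapax legomena and/or the most frequent terms before topic modeling.

          docs: list of tokenized documents (lists of strings).
          top_frequent_pct: share of the vocabulary (by collection frequency) to drop
          from the top, e.g. 0.5 for the top 0.5% of terms.
          """
          freq = Counter(tok for doc in docs for tok in doc)
          vocab = [t for t, _ in freq.most_common()]          # most frequent first

          n_top = int(len(vocab) * top_frequent_pct / 100)
          dropped = set(vocab[:n_top])
          if drop_singletons:
              dropped |= {t for t, c in freq.items() if c == 1}

          kept = set(vocab) - dropped
          pruned_docs = [[t for t in doc if t in kept] for doc in docs]
          return pruned_docs, kept

      docs = [["topic", "model", "vocabulary", "size"],
              ["vocabulary", "pruning", "before", "topic", "model", "training"],
              ["entropy", "and", "topic", "similarity", "measures"]]
      pruned, kept = prune_vocabulary(docs, drop_singletons=True, top_frequent_pct=0.5)
      print(len(kept), "terms kept of", len({t for d in docs for t in d}))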
  12. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.02
    0.023879956 = product of:
      0.095519826 = sum of:
        0.095519826 = sum of:
          0.05070948 = weight(_text_:science in 3840) [ClassicSimilarity], result of:
            0.05070948 = score(doc=3840,freq=8.0), product of:
              0.124457374 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.047248192 = queryNorm
              0.40744454 = fieldWeight in 3840, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3840)
          0.044810344 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
            0.044810344 = score(doc=3840,freq=2.0), product of:
              0.16545512 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047248192 = queryNorm
              0.2708308 = fieldWeight in 3840, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3840)
      0.25 = coord(1/4)
    
    Abstract
    Linguistics is the scientific study of language which emphasizes language spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in contribution to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, these being two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
  13. Bernth, A.; McCord, M.; Warburton, K.: Terminology extraction for global content management (2003) 0.02
    0.023722902 = product of:
      0.09489161 = sum of:
        0.09489161 = weight(_text_:management in 4122) [ClassicSimilarity], result of:
          0.09489161 = score(doc=4122,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.5958457 = fieldWeight in 4122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.125 = fieldNorm(doc=4122)
      0.25 = coord(1/4)
    
  14. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.02
    0.023577852 = product of:
      0.047155704 = sum of:
        0.03595312 = weight(_text_:management in 3807) [ClassicSimilarity], result of:
          0.03595312 = score(doc=3807,freq=6.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22575769 = fieldWeight in 3807, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3807)
        0.011202586 = product of:
          0.022405172 = sum of:
            0.022405172 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.022405172 = score(doc=3807,freq=2.0), product of:
                0.16545512 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047248192 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria, which included that definitions should be unique instances. From 2006 onwards, the authors could identify no new unique definition instances, only repeated use of existing ones. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues. Design/methodology/approach: The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods. Findings: By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 67(2015) no.2, S.203-229
  15. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.02
    0.023225334 = product of:
      0.046450667 = sum of:
        0.035584353 = weight(_text_:management in 2849) [ClassicSimilarity], result of:
          0.035584353 = score(doc=2849,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 2849, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2849)
        0.010866316 = product of:
          0.021732632 = sum of:
            0.021732632 = weight(_text_:science in 2849) [ClassicSimilarity], result of:
              0.021732632 = score(doc=2849,freq=2.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.17461908 = fieldWeight in 2849, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2849)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    This publication originated as part of a thesis for the Master of Science FHO in Business Administration, Major in Information and Data Management.
  16. AL-Smadi, M.; Jaradat, Z.; AL-Ayyoub, M.; Jararweh, Y.: Paraphrase identification and semantic text similarity analysis in Arabic news tweets using lexical, syntactic, and semantic features (2017) 0.02
    0.023225334 = product of:
      0.046450667 = sum of:
        0.035584353 = weight(_text_:management in 5095) [ClassicSimilarity], result of:
          0.035584353 = score(doc=5095,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.22344214 = fieldWeight in 5095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=5095)
        0.010866316 = product of:
          0.021732632 = sum of:
            0.021732632 = weight(_text_:science in 5095) [ClassicSimilarity], result of:
              0.021732632 = score(doc=5095,freq=2.0), product of:
                0.124457374 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.047248192 = queryNorm
                0.17461908 = fieldWeight in 5095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5095)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Cf.: http://www.sciencedirect.com/science/article/pii/S0306457316302382.
    Source
    Information processing and management. 53(2017) no.3, S.640-652
  17. Kettunen, K.: Reductive and generative approaches to management of morphological variation of keywords in monolingual information retrieval : an overview (2009) 0.02
    0.021790877 = product of:
      0.08716351 = sum of:
        0.08716351 = weight(_text_:management in 2835) [ClassicSimilarity], result of:
          0.08716351 = score(doc=2835,freq=12.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.54731923 = fieldWeight in 2835, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this article is to discuss advantages and disadvantages of various means to manage morphological variation of keywords in monolingual information retrieval. Design/methodology/approach - The authors present a compilation of query results from 11 mostly European languages and a new general classification of the language dependent techniques for management of morphological variation. Variants of the different techniques are compared in some detail in terms of retrieval effectiveness and other criteria. The paper consists mainly of an overview of different management methods for keyword variation in information retrieval. Typical IR retrieval results of 11 languages and a new classification for keyword management methods are also presented. Findings - The main results of the paper are an overall comparison of reductive and generative keyword management methods in terms of retrieval effectiveness and other broader criteria. Originality/value - The paper is of value to anyone who wants to get an overall picture of keyword management techniques used in IR.
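    Item 17 contrasts reductive management of morphological variation (conflating word forms at indexing time, e.g. stemming or lemmatization) with generative management (expanding a query keyword into its variant forms at query time). A toy sketch of the two directions; the suffix list and the inflection table are invented for illustration and are not taken from the article:

      # Reductive: map each word form to a single index key (here: naive suffix stripping).
      def reduce_form(word):
          for suffix in ("ification", "ations", "ation", "ings", "ing", "es", "s"):
              if word.endswith(suffix) and len(word) > len(suffix) + 3:
                  return word[: -len(suffix)]
          return word

      # Generative: expand a query keyword into its variant forms at query time
      # (a hand-made inflection table stands in for a morphological generator).
      VARIANTS = {
          "classify": ["classify", "classifies", "classified", "classifying", "classification"],
      }

      def expand_query(keyword):
          return VARIANTS.get(keyword, [keyword])

      print(reduce_form("classifications"))   # reductive: query and index side conflate to the same key
      print(expand_query("classify"))         # generative: the query is OR-ed over the variants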
  18. Sheridan, P.; Smeaton, A.F.: ¬The application of morpho-syntactic language processing to effective phrase matching (1992) 0.02
    0.02075754 = product of:
      0.08303016 = sum of:
        0.08303016 = weight(_text_:management in 6575) [ClassicSimilarity], result of:
          0.08303016 = score(doc=6575,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.521365 = fieldWeight in 6575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.109375 = fieldNorm(doc=6575)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 28(1992) no.3, S.349-369
  19. Salton, G.; Buckley, C.; Smith, M.: On the application of syntactic methodologies in automatic text analysis (1990) 0.02
    0.02075754 = product of:
      0.08303016 = sum of:
        0.08303016 = weight(_text_:management in 7864) [ClassicSimilarity], result of:
          0.08303016 = score(doc=7864,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.521365 = fieldWeight in 7864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.109375 = fieldNorm(doc=7864)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 26(1990) no.1, S.73-92
  20. Haas, S.W.: A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.02
    0.02075754 = product of:
      0.08303016 = sum of:
        0.08303016 = weight(_text_:management in 8071) [ClassicSimilarity], result of:
          0.08303016 = score(doc=8071,freq=2.0), product of:
            0.15925534 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.047248192 = queryNorm
            0.521365 = fieldWeight in 8071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.109375 = fieldNorm(doc=8071)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 26(1990), S.615-628

Languages

  • e 276
  • d 27
  • m 2
  • f 1

Types

  • a 257
  • m 34
  • el 12
  • s 12
  • x 5
  • p 2
  • b 1
  • d 1

Classifications