Search (459 results, page 1 of 23)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.068927675 = product of:
      0.10339151 = sum of:
        0.08232375 = product of:
          0.24697125 = sum of:
            0.24697125 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.24697125 = score(doc=562,freq=2.0), product of:
                0.43943653 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0518325 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.021067765 = product of:
          0.04213553 = sum of:
            0.04213553 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.04213553 = score(doc=562,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
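The score breakdown attached to each hit is Lucene "explain" output for the classic TF-IDF similarity: every matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the summed contributions are scaled by coord (matched clauses / total clauses). The sketch below reproduces that arithmetic with the numbers from the first hit; it is an illustrative reconstruction, not Lucene's actual API.

```python
from math import sqrt

# Minimal sketch (not Lucene's code) of how the "explain" trees above combine
# under ClassicSimilarity: tf = sqrt(termFreq), each term contributes
# queryWeight * fieldWeight, and the summed clause scores are scaled by
# coord(matching clauses / total clauses).

def term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm                # e.g. 8.478011 * 0.0518325 = 0.43943653
    field_weight = sqrt(freq) * idf * field_norm   # e.g. 1.4142135 * 8.478011 * 0.046875
    return query_weight * field_weight

# Reproduces the first hit (doc 562): the "_text_:3a" and "_text_:22" clauses.
w_3a = term_score(2.0, 8.478011, 0.0518325, 0.046875)   # ~0.24697125
w_22 = term_score(2.0, 3.5018296, 0.0518325, 0.046875)  # ~0.04213553
score = (w_3a * (1 / 3) + w_22 * (1 / 2)) * (2 / 3)     # coord factors from the tree
print(round(score, 6))                                   # ~0.068928, the reported score
```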
  2. Warner, A.J.: Natural language processing (1987) 0.06
    0.05627845 = product of:
      0.08441767 = sum of:
        0.02823696 = weight(_text_:information in 337) [ClassicSimilarity], result of:
          0.02823696 = score(doc=337,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3103276 = fieldWeight in 337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=337)
        0.056180708 = product of:
          0.112361416 = sum of:
            0.112361416 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.112361416 = score(doc=337,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  3. Perez-Carballo, J.; Strzalkowski, T.: Natural language information retrieval : progress report (2000) 0.05
    0.05365639 = product of:
      0.080484584 = sum of:
        0.034941453 = weight(_text_:information in 6421) [ClassicSimilarity], result of:
          0.034941453 = score(doc=6421,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3840108 = fieldWeight in 6421, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6421)
        0.045543127 = product of:
          0.09108625 = sum of:
            0.09108625 = weight(_text_:management in 6421) [ClassicSimilarity], result of:
              0.09108625 = score(doc=6421,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.521365 = fieldWeight in 6421, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6421)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 36(2000) no.1, S.155-205
  4. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.05
    0.049812775 = product of:
      0.07471916 = sum of:
        0.010698592 = weight(_text_:information in 3807) [ClassicSimilarity], result of:
          0.010698592 = score(doc=3807,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.11757882 = fieldWeight in 3807, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3807)
        0.06402057 = sum of:
          0.039441507 = weight(_text_:management in 3807) [ClassicSimilarity], result of:
            0.039441507 = score(doc=3807,freq=6.0), product of:
              0.17470726 = queryWeight, product of:
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.0518325 = queryNorm
              0.22575769 = fieldWeight in 3807, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.3706124 = idf(docFreq=4130, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
          0.02457906 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.02457906 = score(doc=3807,freq=2.0), product of:
              0.18150859 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0518325 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - Academic authors tend to define terms that meet their own needs. Knowledge Management (KM) is a term that comes to mind and is examined in this study. Lexicographical research identified KM terms used by authors from 1996 to 2006 in academic outlets to define KM. Data were collected based on strict criteria which included that definitions should be unique instances. From 2006 onwards, these authors could not identify new unique instances of definitions with repetitive usage of such definition instances. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues. Design/methodology/approach - The aim of this paper is to add to the body of knowledge in the KM discipline and supply KM practitioners and scholars with insight into what is commonly regarded to be KM so as to reignite the debate on what one could consider as KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods as extended through Knowledge Discovery and Text Analysis methods. Findings - By simplifying term relationships through the application of lexicographical research methods, as extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). One would therefore be able to indicate that KM, from an academic point of view, refers to people processing contextualised content.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 67(2015) no.2, S.203-229
  5. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.05
    0.0480569 = product of:
      0.07208535 = sum of:
        0.029949818 = weight(_text_:information in 4483) [ClassicSimilarity], result of:
          0.029949818 = score(doc=4483,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3291521 = fieldWeight in 4483, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.04213553 = product of:
          0.08427106 = sum of:
            0.08427106 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.08427106 = score(doc=4483,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
  6. Sheridan, P.; Smeaton, A.F.: The application of morpho-syntactic language processing to effective phrase matching (1992) 0.05
    0.046833646 = product of:
      0.07025047 = sum of:
        0.02470734 = weight(_text_:information in 6575) [ClassicSimilarity], result of:
          0.02470734 = score(doc=6575,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.27153665 = fieldWeight in 6575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6575)
        0.045543127 = product of:
          0.09108625 = sum of:
            0.09108625 = weight(_text_:management in 6575) [ClassicSimilarity], result of:
              0.09108625 = score(doc=6575,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.521365 = fieldWeight in 6575, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6575)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 28(1992) no.3, S.349-369
  7. Salton, G.; Buckley, C.; Smith, M.: On the application of syntactic methodologies in automatic text analysis (1990) 0.05
    0.046833646 = product of:
      0.07025047 = sum of:
        0.02470734 = weight(_text_:information in 7864) [ClassicSimilarity], result of:
          0.02470734 = score(doc=7864,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.27153665 = fieldWeight in 7864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=7864)
        0.045543127 = product of:
          0.09108625 = sum of:
            0.09108625 = weight(_text_:management in 7864) [ClassicSimilarity], result of:
              0.09108625 = score(doc=7864,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.521365 = fieldWeight in 7864, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7864)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 26(1990) no.1, S.73-92
  8. Haas, S.W.: A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.05
    0.046833646 = product of:
      0.07025047 = sum of:
        0.02470734 = weight(_text_:information in 8071) [ClassicSimilarity], result of:
          0.02470734 = score(doc=8071,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.27153665 = fieldWeight in 8071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=8071)
        0.045543127 = product of:
          0.09108625 = sum of:
            0.09108625 = weight(_text_:management in 8071) [ClassicSimilarity], result of:
              0.09108625 = score(doc=8071,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.521365 = fieldWeight in 8071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8071)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 26(1990), S.615-628
  9. Atlam, E.S.: Similarity measurement using term negative weight and its application to word similarity (2000) 0.05
    0.046833646 = product of:
      0.07025047 = sum of:
        0.02470734 = weight(_text_:information in 4844) [ClassicSimilarity], result of:
          0.02470734 = score(doc=4844,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.27153665 = fieldWeight in 4844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4844)
        0.045543127 = product of:
          0.09108625 = sum of:
            0.09108625 = weight(_text_:management in 4844) [ClassicSimilarity], result of:
              0.09108625 = score(doc=4844,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.521365 = fieldWeight in 4844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4844)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 36(2000) no.5, S.717-736
  10. Kettunen, K.: Reductive and generative approaches to management of morphological variation of keywords in monolingual information retrieval : an overview (2009) 0.04
    0.044100516 = product of:
      0.06615077 = sum of:
        0.018340444 = weight(_text_:information in 2835) [ClassicSimilarity], result of:
          0.018340444 = score(doc=2835,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.20156369 = fieldWeight in 2835, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
        0.047810324 = product of:
          0.09562065 = sum of:
            0.09562065 = weight(_text_:management in 2835) [ClassicSimilarity], result of:
              0.09562065 = score(doc=2835,freq=12.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.54731923 = fieldWeight in 2835, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - The purpose of this article is to discuss advantages and disadvantages of various means to manage morphological variation of keywords in monolingual information retrieval. Design/methodology/approach - The authors present a compilation of query results from 11 mostly European languages and a new general classification of the language dependent techniques for management of morphological variation. Variants of the different techniques are compared in some detail in terms of retrieval effectiveness and other criteria. The paper consists mainly of an overview of different management methods for keyword variation in information retrieval. Typical IR retrieval results of 11 languages and a new classification for keyword management methods are also presented. Findings - The main results of the paper are an overall comparison of reductive and generative keyword management methods in terms of retrieval effectiveness and other broader criteria. Originality/value - The paper is of value to anyone who wants to get an overall picture of keyword management techniques used in IR.
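As a rough illustration of the two strategy families the abstract contrasts (a toy sketch, not code from the paper): a reductive method conflates surface variants to a single index key at indexing and query time, whereas a generative method expands a query keyword into explicit variant forms that are then OR-ed into the query.

```python
# Toy contrast of the two strategy families named in the abstract
# (illustrative only, not from Kettunen's paper): a reductive method
# conflates surface forms to one key; a generative method expands a
# base form into explicit surface variants for query expansion.

def reductive(term: str) -> str:
    # crude suffix stripping, standing in for stemming/lemmatization
    for suffix in ("ing", "ed", "es", "s"):
        if term.endswith(suffix) and len(term) - len(suffix) >= 3:
            return term[: -len(suffix)]
    return term

def generative(base: str) -> list[str]:
    # naive variant generation, standing in for morphological generation
    return [base, base + "s", base + "ed", base + "ing"]

print(reductive("keywords"))   # -> 'keyword'
print(generative("index"))     # -> ['index', 'indexs', 'indexed', 'indexing']
```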
  11. Rau, L.F.; Jacobs, P.S.; Zernik, U.: Information extraction and text summarization using linguistic knowledge acquisition (1989) 0.04
    0.040405143 = product of:
      0.060607713 = sum of:
        0.03458307 = weight(_text_:information in 6683) [ClassicSimilarity], result of:
          0.03458307 = score(doc=6683,freq=12.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.38007212 = fieldWeight in 6683, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6683)
        0.026024643 = product of:
          0.052049287 = sum of:
            0.052049287 = weight(_text_:management in 6683) [ClassicSimilarity], result of:
              0.052049287 = score(doc=6683,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.29792285 = fieldWeight in 6683, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6683)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Storing and accessing texts in a conceptual format has a number of advantages over traditional document retrieval methods. A conceptual format facilitates natural language access to text information. It can support imprecise and inexact queries, conceptual information summarisation, and, ultimately, document translation. Describes 2 methods which have been implemented in a prototype intelligent information retrieval system called SCISOR (System for Conceptual Information Summarisation, Organization and Retrieval). Describes the text processing, language acquisition, and summarisation components of SCISOR
    Source
    Information processing and management. 25(1989) no.4, S.419-428
  12. Mustafa El Hadi, W.: Evaluating human language technology : general applications to information access and management (2002) 0.04
    0.040143125 = product of:
      0.060214683 = sum of:
        0.02117772 = weight(_text_:information in 1840) [ClassicSimilarity], result of:
          0.02117772 = score(doc=1840,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.23274569 = fieldWeight in 1840, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1840)
        0.039036963 = product of:
          0.078073926 = sum of:
            0.078073926 = weight(_text_:management in 1840) [ClassicSimilarity], result of:
              0.078073926 = score(doc=1840,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.44688427 = fieldWeight in 1840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1840)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  13. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.04
    0.038175866 = product of:
      0.057263795 = sum of:
        0.032684736 = weight(_text_:information in 3840) [ClassicSimilarity], result of:
          0.032684736 = score(doc=3840,freq=14.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3592092 = fieldWeight in 3840, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3840)
        0.02457906 = product of:
          0.04915812 = sum of:
            0.04915812 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04915812 = score(doc=3840,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Linguistics is the scientific study of language which emphasizes language spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in contribution to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, these being two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  14. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.04
    0.035029523 = product of:
      0.05254428 = sum of:
        0.024453925 = weight(_text_:information in 6752) [ClassicSimilarity], result of:
          0.024453925 = score(doc=6752,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.2687516 = fieldWeight in 6752, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.028090354 = product of:
          0.056180708 = sum of:
            0.056180708 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.056180708 = score(doc=6752,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog
    Date
    6. 3.1997 16:22:15
  15. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.03
    0.034801804 = product of:
      0.052202705 = sum of:
        0.027623646 = weight(_text_:information in 2345) [ClassicSimilarity], result of:
          0.027623646 = score(doc=2345,freq=10.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.3035872 = fieldWeight in 2345, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.02457906 = product of:
          0.04915812 = sum of:
            0.04915812 = weight(_text_:22 in 2345) [ClassicSimilarity], result of:
              0.04915812 = score(doc=2345,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2708308 = fieldWeight in 2345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  16. Mustafa el Hadi, W.: Human language technology and its role in information access and management (2003) 0.03
    0.0339379 = product of:
      0.050906852 = sum of:
        0.027904097 = weight(_text_:information in 5524) [ClassicSimilarity], result of:
          0.027904097 = score(doc=5524,freq=20.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.30666938 = fieldWeight in 5524, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5524)
        0.023002753 = product of:
          0.046005506 = sum of:
            0.046005506 = weight(_text_:management in 5524) [ClassicSimilarity], result of:
              0.046005506 = score(doc=5524,freq=4.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2633291 = fieldWeight in 5524, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5524)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The role of linguistics in information access, extraction and dissemination is essential. Radical changes in the techniques of information and communication at the end of the twentieth century have had a significant effect on the function of the linguistic paradigm and its applications in all forms of communication. The introduction of new technical means has deeply changed the possibilities for the distribution of information. In this situation, what is the role of the linguistic paradigm and its practical applications, i.e., natural language processing (NLP) techniques when applied to information access? What solutions can linguistics offer in human computer interaction, extraction and management? Many fields show the relevance of the linguistic paradigm through the various technologies that require NLP, such as document and message understanding, information detection, extraction, and retrieval, question and answer, cross-language information retrieval (CLIR), text summarization, filtering, and spoken document retrieval. This paper focuses on the central role of human language technologies in the information society, surveys the current situation, describes the benefits of the above mentioned applications, outlines successes and challenges, and discusses solutions. It reviews the resources and means needed to advance information access and dissemination across language boundaries in the twenty-first century. Multilingualism, which is a natural result of globalization, requires more effort in the direction of language technology. The scope of human language technology (HLT) is large, so we limit our review to applications that involve multilinguality.
    Content
    Contribution to a special issue "Knowledge organization and classification in international information retrieval"
  17. Addison, E.R.; Wilson, H.D.; Feder, J.: ¬The impact of plain English searching on end users (1993) 0.03
    0.03365238 = product of:
      0.05047857 = sum of:
        0.024453925 = weight(_text_:information in 5354) [ClassicSimilarity], result of:
          0.024453925 = score(doc=5354,freq=6.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.2687516 = fieldWeight in 5354, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5354)
        0.026024643 = product of:
          0.052049287 = sum of:
            0.052049287 = weight(_text_:management in 5354) [ClassicSimilarity], result of:
              0.052049287 = score(doc=5354,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.29792285 = fieldWeight in 5354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5354)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Commercial software products are available with plain English searching capabilities as engines for online and CD-ROM information services, and for internal text information management. With plain English interfaces, end users do not need to master the keyword and connector approach of the Boolean search query language. Describes plain English searching and its impact on the process of full text retrieval. Explores the issues of ease of use, reliability and implications for the total research process
    Imprint
    Medford, NJ : Learned Information
  18. Pereira, C.N.; Grosz, B.J.: Natural language processing (1994) 0.03
    0.0334526 = product of:
      0.0501789 = sum of:
        0.017648099 = weight(_text_:information in 8602) [ClassicSimilarity], result of:
          0.017648099 = score(doc=8602,freq=2.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.19395474 = fieldWeight in 8602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=8602)
        0.032530803 = product of:
          0.06506161 = sum of:
            0.06506161 = weight(_text_:management in 8602) [ClassicSimilarity], result of:
              0.06506161 = score(doc=8602,freq=2.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.37240356 = fieldWeight in 8602, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8602)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Review in: Information processing and management 32(1996) no.1, S.122-123 (P.B. Heidorn)
  19. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.03
    0.032037936 = product of:
      0.0480569 = sum of:
        0.019966545 = weight(_text_:information in 7415) [ClassicSimilarity], result of:
          0.019966545 = score(doc=7415,freq=4.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.21943474 = fieldWeight in 7415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=7415)
        0.028090354 = product of:
          0.056180708 = sum of:
            0.056180708 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
              0.056180708 = score(doc=7415,freq=2.0), product of:
                0.18150859 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0518325 = queryNorm
                0.30952093 = fieldWeight in 7415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
    Source
    Annual review of information science and technology. 31(1996), S.83-119
  20. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.03
    0.031973958 = product of:
      0.047960933 = sum of:
        0.02495818 = weight(_text_:information in 1312) [ClassicSimilarity], result of:
          0.02495818 = score(doc=1312,freq=16.0), product of:
            0.09099081 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0518325 = queryNorm
            0.27429342 = fieldWeight in 1312, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1312)
        0.023002753 = product of:
          0.046005506 = sum of:
            0.046005506 = weight(_text_:management in 1312) [ClassicSimilarity], result of:
              0.046005506 = score(doc=1312,freq=4.0), product of:
                0.17470726 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0518325 = queryNorm
                0.2633291 = fieldWeight in 1312, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic natural language processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot test in which two information specialists use the adapted application for a realistic information-seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2531-2543

Types

  • a 394
  • m 35
  • el 26
  • s 19
  • x 10
  • p 3
  • d 2
  • b 1