Search (408 results, page 1 of 21)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.10
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al.
  3. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.10
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions. (A minimal edit-distance sketch of this kind of spelling suggestion follows this record.)
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
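    The ChemSpell-style similarity matching described above can be illustrated with a short, self-contained sketch: rank dictionary terms by Levenshtein edit distance to a misspelled query. This is a minimal illustration only; the vocabulary, function names, and distance threshold are invented, and the actual AZdict/ChemSpell components are considerably richer.
    ```python
    # Minimal sketch of dictionary-based spelling suggestion: rank vocabulary
    # terms by edit distance to the query and keep only the close ones.

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via the classic dynamic-programming recurrence."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def suggest(query: str, vocabulary: list[str], max_distance: int = 2) -> list[str]:
        """Return vocabulary terms within max_distance edits, closest first."""
        scored = sorted((edit_distance(query.lower(), term.lower()), term)
                        for term in vocabulary)
        return [term for d, term in scored if d <= max_distance]

    if __name__ == "__main__":
        vocab = ["toluene", "toxaphene", "benzene", "toxicology", "xylene"]
        print(suggest("tolune", vocab))  # -> ['toluene']
    ```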
  4. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  5. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.09
    
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
    Source
    Annual review of information science and technology. 31(1996), S.83-119
  6. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.07
    
    Source
    https://arxiv.org/abs/2212.06721
  7. Czejdo. B.D.; Tucci, R.P.: ¬A dataflow graphical language for database applications (1994) 0.06
    
    Abstract
    Discusses a graphical language for information retrieval and processing. A lot of recent activity has occurred in the area of improving access to database systems. However, current results are restricted to simple interfacing of database systems. Proposes a graphical language for specifying complex applications.
    Date
    20.10.2000 13:29:46
  8. Ciganik, M.: Pred koordinaciou a kooperaciou informacnych systemov (1997) 0.04
    
    Abstract
    The information requirements for library users can only be met if individual information systems are compatible, i.e. based on the use of a single information language. Points out that natural language is the best instrument for integration of information systems. Presents a model of the structure of natural language, extended by metaknowledge elements, which make it possible to analyse and represent text without the need for syntax analysis.
    Footnote
    Translated title: Coordination of information systems
    Source
    Kniznice a informacie. 29(1997) no.10, S.389-396
  9. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
  10. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.04
    
    Abstract
    The two seemingly conflicting tendencies, synergy and divergence, are both fundamental to the advancement of any science. Their interplay defines the demarcation line between application-oriented and theoretical research. The papers in this festschrift in honour of Peter Hellwig are geared to answer questions that arise from this insight: where does the discipline of Computational Linguistics currently stand, what has been achieved so far, and what should be done next? Given the complexity of such questions, no simple answers can be expected. However, each of the practitioners and researchers contributes, from their very own perspective, a piece of insight into the overall picture of today's and tomorrow's computational linguistics.
    Content
    Contents: Manfred Klenner / Henriette Visser: Introduction - Khurshid Ahmad: Writing Linguistics: When I use a word it means what I choose it to mean - Jürgen Handke: 2000 and Beyond: The Potential of New Technologies in Linguistics - Jurij Apresjan / Igor Boguslavsky / Leonid Iomdin / Leonid Tsinman: Lexical Functions in NU: Possible Uses - Hubert Lehmann: Practical Machine Translation and Linguistic Theory - Karin Haenelt: A Context-based Approach towards Content Processing of Electronic Documents - Petr Sgall / Eva Hajicová: Are Linguistic Frameworks Comparable? - Wolfgang Menzel: Theory and Applications in Computational Linguistics - Is there Common Ground? - Robert Porzel / Michael Strube: Towards Context-adaptive Natural Language Processing Systems - Nicoletta Calzolari: Language Resources in a Multilingual Setting: The European Perspective - Piek Vossen: Computational Linguistics for Theory and Practice.
  11. Goshawke, W.; Kelly, D.K.; Wigg, J.D.: Computer translation of natural language (1987) 0.04
    
    PRECIS
    Languages / Translation / Applications of computer systems
    Subject
    Languages / Translation / Applications of computer systems
  12. Croft, W.B.: Knowledge-based and statistical approaches to text retrieval (1993) 0.04
    
    Source
    IEEE expert intelligent systems and their applications. 8(1993) no.2, S.8-12
  13. Stede, M.: Lexicalization in natural language generation (2002) 0.04
    
    Abstract
    Natural language generation (NLG), the automatic production of text by computers, is commonly seen as a process consisting of several distinct phases. Obviously, choosing words is a central aspect of generating language. In which of these phases it should take place is not entirely clear, however. The decision depends on various factors: what exactly is seen as an individual lexical item; how the relation between word meaning and background knowledge (concepts) is defined; how one accounts for the interactions between individual lexical choices in the same sentence; what criteria are employed for choosing between similar words; whether or not output is required in one or more languages. This article surveys these issues and the answers that have been proposed in NLG research. For many applications of natural language processing, large-scale lexical resources have become available in recent years, such as the WordNet database. In language generation, however, generic lexicons are not yet in use; rather, almost every generation project develops its own format for lexical representations. The reason is that the entries of a generation lexicon need their specific interfaces to the input representations processed by the generator; lexical semantics in an NLG lexicon needs to be tailored to the input. On the other hand, the large lexicons used for language analysis typically have only very limited semantic information. Yet the syntactic behavior of words remains the same regardless of the particular application; thus, it should be possible to build at least parts of generic NLG lexical entries automatically, which could then be used by different systems. (A toy illustration of lexical choice follows this record.)
    Source
    Encyclopedia of library and information science. Vol.70, [=Suppl.33]
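    As a toy illustration of the lexical-choice step discussed in the abstract above, the following sketch maps a concept to candidate lexemes and selects among near-synonyms by a register preference. The lexicon entries and the register criterion are invented for illustration; real generation lexicons encode far more (syntax, collocations, semantics).
    ```python
    # Toy generation lexicon: each concept maps to candidate lexemes annotated
    # with a register; lexical choice picks the candidate matching the desired
    # register, falling back to the first candidate otherwise.

    LEXICON = {
        "MOTION-FAST": [
            {"lemma": "run",    "register": "neutral"},
            {"lemma": "dash",   "register": "informal"},
            {"lemma": "hasten", "register": "formal"},
        ],
    }

    def lexicalize(concept: str, register: str = "neutral") -> str:
        candidates = LEXICON.get(concept, [])
        for entry in candidates:
            if entry["register"] == register:
                return entry["lemma"]
        # No candidate in the requested register: fall back.
        return candidates[0]["lemma"] if candidates else concept.lower()

    print(lexicalize("MOTION-FAST", "formal"))    # -> hasten
    print(lexicalize("MOTION-FAST", "informal"))  # -> dash
    ```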
  14. Bowker, L.: Information retrieval in translation memory systems : assessment of current limitations and possibilities for future development (2002) 0.03
    
    Abstract
    A translation memory system is a new type of human language technology (HLT) tool that is gaining popularity among translators. Such tools allow translators to store previously translated texts in a type of aligned bilingual database, and to recycle relevant parts of these texts when producing new translations. Currently, these tools retrieve information from the database using superficial character string matching, which often results in poor precision and recall. This paper explains how translation memory systems work, and it considers some possible ways of introducing more sophisticated information retrieval techniques into such systems by taking syntactic and semantic similarity into account. Some of the suggested techniques are inspired by those used in other areas of HLT, and some by techniques used in information science. (A minimal sketch of the baseline string-matching retrieval follows this record.)
    Source
    Knowledge organization. 29(2002) nos.3/4, S.198-203
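    The "superficial character string matching" baseline that the abstract above criticizes can be sketched in a few lines: retrieve the stored source segment most similar to the new sentence and return its translation. The memory contents and threshold are invented; difflib's ratio stands in for whatever fuzzy-match metric a real TM tool uses.
    ```python
    # Baseline translation-memory retrieval by surface string similarity.
    import difflib

    MEMORY = [
        ("The file could not be opened.", "Die Datei konnte nicht geöffnet werden."),
        ("Save the file before closing.", "Speichern Sie die Datei vor dem Schließen."),
    ]

    def retrieve(sentence: str, threshold: float = 0.6):
        """Return ((source, translation), score) for the best fuzzy match, or (None, score)."""
        def sim(pair):
            return difflib.SequenceMatcher(None, sentence, pair[0]).ratio()
        best = max(MEMORY, key=sim)
        score = sim(best)
        return (best, score) if score >= threshold else (None, score)

    match, score = retrieve("The file cannot be opened.")
    print(round(score, 2), match)  # near-identical wording reuses the stored translation
    ```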
  15. Bertrand-Gastaldy, S.: ¬La modelisation de l'analyse documentaire : à la convergence de la semiotique, de la psychologie cognitive et de l'intelligence (1995) 0.03
    
    Abstract
    Textual semiotics and cognitive psychology are advocated to model several types of documentary analysis. Proposes a theoretical model which combines elements from the two disciplines. Thanks to the addition of values of properties pertaining to different semiotic systems to the primary and secondary texts, one can retrieve the units and the characteristics valued by a group of indexers or by one individual. The cognitive studies of the experts confirm or complete the textual analysis. Examples from the findings obtained by the statistical-linguistic analysis of two corpora illustrate the usefulness of the methodology, especially for the design of expert systems to assist any kind of reading.
    Imprint
    Alberta : Alberta University, School of Library and Information Studies
    Source
    Connectedness: information, systems, people, organizations. Proceedings of CAIS/ACSI 95, the proceedings of the 23rd Annual Conference of the Canadian Association for Information Science. Ed. by Hope A. Olson and Denis B. Ward
  16. Muresan, S.; Klavans, J.L.: Inducing terminologies from text : a case study for the consumer health domain (2013) 0.03
    
    Abstract
    Specialized medical ontologies and terminologies, such as SNOMED CT and the Unified Medical Language System (UMLS), have been successfully leveraged in medical information systems to provide a standard web-accessible medium for interoperability, access, and reuse. However, these clinically oriented terminologies and ontologies cannot provide sufficient support when integrated into consumer-oriented applications, because these applications must "understand" both technical and lay vocabulary. The latter is not part of these specialized terminologies and ontologies. In this article, we propose a two-step approach for building consumer health terminologies from text: 1) automatic extraction of definitions from consumer-oriented articles and web documents, which reflects language in use, rather than relying solely on dictionaries, and 2) learning to map definitions expressed in natural language to terminological knowledge by inducing a syntactic-semantic grammar rather than using hand-written patterns or grammars. We present quantitative and qualitative evaluations of our two-step approach, which show that our framework could be used to induce consumer health terminologies from text. (A toy pattern-based illustration of the definition-extraction step follows this record.)
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.727-744
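    Step 1 of the approach above (definition extraction) can be caricatured with a single hand-written pattern of the "X is a Y that ..." kind. The contrast is deliberate: the article induces a syntactic-semantic grammar precisely to avoid fixed patterns like this one. The pattern and example sentences are invented.
    ```python
    # Hand-written "X is a/an Y that ..." pattern for definition extraction.
    import re

    DEF_PATTERN = re.compile(
        r"^(?P<term>[A-Z][\w -]*?) is an? (?P<genus>[\w -]+?)"
        r"(?: that| which| caused by)(?P<rest>[^.]*)\."
    )

    sentences = [
        "Anemia is a condition that develops when your blood lacks healthy red blood cells.",
        "Drink plenty of water every day.",  # not a definition; no match
    ]

    for s in sentences:
        m = DEF_PATTERN.match(s)
        if m:
            print(f"{m.group('term')} -> genus: {m.group('genus').strip()}")
    # -> Anemia -> genus: condition
    ```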
  17. Chowdhury, G.G.: Natural language processing (2002) 0.03
    
    Abstract
    Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
    Source
    Annual review of information science and technology. 37(2003), S.51-90
  18. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.03
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology databases, thesauri, library classification systems, etc. Essential features of the technology are a lexicographic user interface, variable word description, an unlimited list of word readings, a concept language, automatic transformations of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations. (A toy formula-to-graph parse in this spirit follows this record.)
    Date
    30.12.2001 19:01:22
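    The "automatic transformations of formulas into graphic structures" can be illustrated with a toy parse: turn a nested concept formula into a relation edge list (the graph). The formula syntax here is invented for the sketch; the system's actual concept language is much richer.
    ```python
    # Toy parser turning a nested, well-formed concept "formula" into graph
    # edges (concept, relation, concept). Invented syntax: head(rel(filler), ...).
    import re

    def parse(formula: str):
        tokens = re.findall(r"\w+|[(),]", formula)
        pos = 0

        def node():
            nonlocal pos
            head = tokens[pos]; pos += 1
            edges = []
            if pos < len(tokens) and tokens[pos] == "(":
                pos += 1
                while tokens[pos] != ")":
                    rel = tokens[pos]; pos += 1
                    assert tokens[pos] == "("; pos += 1   # relation opens its filler
                    child, sub = node()
                    edges.append((head, rel, child))
                    edges.extend(sub)
                    assert tokens[pos] == ")"; pos += 1   # relation closes
                    if tokens[pos] == ",":
                        pos += 1
                pos += 1
            return head, edges

        return node()[1]

    print(parse("vehicle(has_part(wheel(made_of(rubber))),used_for(transport))"))
    # -> [('vehicle', 'has_part', 'wheel'), ('wheel', 'made_of', 'rubber'),
    #     ('vehicle', 'used_for', 'transport')]
    ```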
  19. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.03
    
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics. (A toy co-occurrence sketch of the candidate-keyword idea follows this record.)
    Source
    Journal of Intelligent Information Systems
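    A toy version of the candidate-keyword discovery above: scan domain sentences for pairs of known ontology concepts and collect the words between them as candidate relation labels. The concepts, sentences, and the bare between-words heuristic (no stopword filtering, no statistics) are invented simplifications of what the paper does.
    ```python
    # Collect words occurring between two known concepts as candidate
    # semantic-relation labels, counted across a tiny invented corpus.
    from collections import Counter
    import re

    CONCEPTS = {"butterfly", "caterpillar", "nectar"}

    sentences = [
        "The butterfly drinks nectar from flowers.",
        "A caterpillar becomes a butterfly after metamorphosis.",
        "The butterfly sips nectar in the morning.",
    ]

    candidates = Counter()
    for s in sentences:
        tokens = re.findall(r"[a-z]+", s.lower())
        hits = [i for i, t in enumerate(tokens) if t in CONCEPTS]
        for a, b in zip(hits, hits[1:]):
            candidates.update(tokens[a + 1:b])  # words between adjacent concept mentions

    print(candidates.most_common())
    # -> [('drinks', 1), ('becomes', 1), ('a', 1), ('sips', 1)]  (stopwords left in)
    ```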
  20. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.03
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers for the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines are widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). However, research in crossing language boundaries, especially across European languages and Oriental languages, is still in its initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term; the direct translation of the input term can also be retrieved in most cases. (A minimal sketch of the co-occurrence step follows this record.)
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.671-682
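    The co-occurrence step of the thesaurus generation above can be sketched directly: count how often an English term and a Chinese term appear in the same aligned segment pair of a parallel corpus, and normalise into a crude association weight. The three-segment "corpus" is invented, and the Hopfield-network spreading activation the paper adds on top is omitted.
    ```python
    # Rank Chinese terms by co-occurrence with an English term across aligned
    # (English tokens, Chinese tokens) segment pairs from a parallel corpus.
    from collections import Counter

    corpus = [
        (["court", "judgment"], ["法院", "判決"]),
        (["court", "appeal"],   ["法院", "上訴"]),
        (["judgment", "final"], ["判決", "終審"]),
    ]

    def associations(term: str, top: int = 3):
        counts = Counter()
        occurrences = 0
        for en_tokens, zh_tokens in corpus:
            if term in en_tokens:
                occurrences += 1
                counts.update(zh_tokens)
        if occurrences == 0:
            return []
        # Crude association weight: co-occurrence count / term occurrences.
        return [(t, c / occurrences) for t, c in counts.most_common(top)]

    print(associations("court"))  # -> [('法院', 1.0), ('判決', 0.5), ('上訴', 0.5)]
    ```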

Languages

  • e 348
  • d 50
  • ru 5
  • m 3
  • chi 1
  • f 1

Types

  • a 328
  • m 55
  • el 29
  • s 18
  • x 7
  • p 3
  • b 1
  • d 1
  • r 1
