Search (63 results, page 1 of 4)

  • × theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.31
    0.30974132 = product of:
      0.46461195 = sum of:
        0.06403218 = product of:
          0.19209655 = sum of:
            0.19209655 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.19209655 = score(doc=562,freq=2.0), product of:
                0.34179783 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.040315803 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.19209655 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.19209655 = score(doc=562,freq=2.0), product of:
            0.34179783 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.040315803 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.19209655 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.19209655 = score(doc=562,freq=2.0), product of:
            0.34179783 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.040315803 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.016386703 = product of:
          0.032773405 = sum of:
            0.032773405 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.032773405 = score(doc=562,freq=2.0), product of:
                0.14117907 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040315803 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
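The nested breakdown above is Lucene's ClassicSimilarity explain output. As a sanity check, its arithmetic can be reproduced in a short Python sketch; the statistics (freq, docFreq, maxDocs, fieldNorm, queryNorm) are copied from the tree, and queryNorm is simply taken as given since it depends on the whole query:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    query_weight = idf(doc_freq, max_docs) * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QN, FN = 0.040315803, 0.046875                 # queryNorm and fieldNorm from the tree
s_rare = term_score(2.0, 24, 44218, QN, FN)    # the "_text_:3a" / "_text_:2f" terms
s_22 = term_score(2.0, 3622, 44218, QN, FN)    # the "_text_:22" term

# combine exactly as the tree does: coord(1/3) and coord(1/2) on the inner
# products, then coord(4/6) on the outer sum
total = (s_rare / 3 + s_rare + s_rare + s_22 / 2) * (4 / 6)
```

Running this reproduces the per-term weight 0.19209655 and the final score 0.30974132 up to floating-point rounding.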
    
    Content
Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.22
    
    Source
https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.20
    
    Content
A Thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Vgl. unter: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    
    Abstract
The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines are widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.).
However, research in crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus with a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
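The co-occurrence analysis step described above can be sketched minimally. The aligned sentence pairs and the Dice-style association weight below are illustrative assumptions, not the paper's actual corpus, segmentation, or formula (the full method also runs a Hopfield network on top of such weights):

```python
from collections import Counter

# Toy aligned English/Chinese sentence pairs (romanized placeholders);
# the real corpus is the HKSAR Department of Justice parallel text.
pairs = [
    (["court", "ruling"], ["fayuan", "caijue"]),
    (["court", "order"], ["fayuan", "mingling"]),
    (["government", "order"], ["zhengfu", "mingling"]),
]

freq, cooc = Counter(), Counter()
for en, zh in pairs:
    for e in set(en):
        freq[e] += 1
        for z in set(zh):
            cooc[e, z] += 1
    for z in set(zh):
        freq[z] += 1

def assoc(e, z):
    # Dice-style association: how often e and z appear in aligned sentences,
    # relative to how often each appears overall
    return 2 * cooc[e, z] / (freq[e] + freq[z])
```

Here assoc("court", "fayuan") is 1.0, suggesting "fayuan" as the cross-lingual counterpart of "court", while assoc("court", "mingling") is only 0.5.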
  5. Warner, A.J.: Natural language processing (1987) 0.01
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  6. Rahmstorf, G.: Compositional semantics and concept representation (1991) 0.01
    
    Abstract
    Concept systems are not only used in the sciences, but also in secondary supporting fields, e.g. in libraries, in documentation, in terminology and increasingly also in knowledge representation. It is suggested that the development of concept systems be based on semantic analysis. Methodical steps are described. The principle of morpho-syntactic composition in semantics will serve as a theoretical basis for the suggested method. The implications and limitations of this principle will be demonstrated
  7. Pritchard-Schoch, T.: Comparing natural language retrieval : Win & Freestyle (1995) 0.01
    
    Abstract
Reports on a comparison of 2 natural language interfaces to full text legal databases: WIN for access to WESTLAW databases and FREESTYLE for access to the LEXIS database. 30 legal issues expressed as natural language queries were presented to identical libraries in both systems. The top 20 ranked documents from each search were analyzed and reviewed for relevance to the legal issue
  8. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.01
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  9. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    
    Date
    8.10.2000 11:52:22
  10. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    
    Date
    31. 7.1996 9:22:19
  11. New tools for human translators (1997) 0.01
    
    Date
    31. 7.1996 9:22:19
  12. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    
    Date
    28. 2.1999 10:48:22
  13. ¬Der Student aus dem Computer (2023) 0.01
    
    Date
    27. 1.2023 16:22:55
  14. Zaitseva, E.M.: Developing linguistic tools of thematic search in library information systems (2023) 0.01
    
    Abstract
Within the R&D program "Information support of research by scientists and specialists on the basis of RNPLS&T Open Archive - the system of scientific knowledge aggregation", the RNPLS&T analyzes the use of linguistic tools of thematic search in modern library information systems and the prospects for their development. The author defines the key common characteristics of e-catalogs of the largest Russian libraries revealed at the first stage of the analysis. Based on these common characteristics and a detailed comparison analysis, the author outlines and substantiates the vectors for enhancing the search interfaces of e-catalogs. The focus is on linguistic tools of thematic search in library information systems; the key vectors suggested are: use of thematic search at different search levels with clear-cut level differentiation; use of combined functionality within the thematic search system; implementation of classification search in all e-catalogs; hierarchical representation of classifications; and use of matching systems for classification information retrieval languages and, in the longer term, between classification and verbal information retrieval languages and between various verbal information retrieval languages. The author formulates practical recommendations to improve thematic search in library information systems.
    Source
    Scientific and technical libraries. 1(2023) no.11, S.66-83
  15. Melucci, M.; Orio, N.: Design, implementation, and evaluation of a methodology for automatic stemmer generation (2007) 0.01
    
    Abstract
The authors describe a statistical approach based on hidden Markov models (HMMs) for generating stemmers automatically. The proposed approach requires little effort to insert new languages into the system even if minimal linguistic knowledge is available. This is a key advantage, especially for digital libraries, which are often developed for a specific institution or government, because the program can manage a large number of documents written in local languages. The evaluation described in the article shows that the stemmers implemented by means of HMMs are as effective as those based on linguistic rules.
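The HMM idea can be illustrated with a deliberately tiny two-state (stem -> suffix) model; every probability below is a made-up assumption for the sketch, whereas the article derives its models statistically and handles arbitrary languages:

```python
import math

# Hypothetical emission probabilities: the suffix state favors characters
# that commonly end English words ("s", "e", "d", "i", "n", "g")
stem_emit = {c: 0.05 for c in "abcdefghijklmnopqrstuvwxyz"}
suffix_emit = {c: 0.01 for c in "abcdefghijklmnopqrstuvwxyz"}
for c in "seding":
    suffix_emit[c] = 0.12

def best_split(word, p_switch=0.3):
    # In this left-to-right HMM each path corresponds to one split point,
    # so the Viterbi path is found by scoring every split and keeping the best.
    best, best_logp = word, -math.inf
    for k in range(1, len(word) + 1):
        logp = sum(math.log(stem_emit[c]) for c in word[:k])
        if k < len(word):
            logp += math.log(p_switch)
            logp += sum(math.log(suffix_emit[c]) for c in word[k:])
        cand = word if k == len(word) else word[:k] + "+" + word[k:]
        if logp > best_logp:
            best_logp, best = logp, cand
    return best
```

Under these toy numbers best_split("walked") yields "walk+ed": the suffix state explains "ed" better than the stem state does, outweighing the transition penalty.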
  16. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    
    Date
    15. 3.2000 10:22:37
  17. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    
    Date
    1. 3.2013 14:56:22
  18. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.01
    
    Source
    c't. 2000, H.22, S.230-231
  19. French, J.C.; Powell, A.L.; Schulman, E.: Using clustering strategies for creating authority files (2000) 0.00
    
    Abstract
    As more online databases are integrated into digital libraries, the issue of quality control of the data becomes increasingly important, especially as it relates to the effective retrieval of information. Authority work, the need to discover and reconcile variant forms of strings in bibliographical entries, will become more critical in the future. Spelling variants, misspellings, and transliteration differences will all increase the difficulty of retrieving information. We investigate a number of approximate string matching techniques that have traditionally been used to help with this problem. We then introduce the notion of approximate word matching and show how it can be used to improve detection and categorization of variant forms. We demonstrate the utility of these approaches using data from the Astrophysics Data System and show how we can reduce the human effort involved in the creation of authority files
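The approximate string matching techniques the authors start from can be illustrated with a plain Levenshtein distance and a relative threshold; the 0.2 cutoff and the sample name strings are arbitrary assumptions for the sketch, not the paper's tuned parameters:

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance, row by row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def are_variants(a, b, rel_threshold=0.2):
    # flag two name strings as likely variants of the same authority entry
    # when the edit distance is small relative to the longer string
    return edit_distance(a.lower(), b.lower()) <= rel_threshold * max(len(a), len(b))
```

For example, are_variants("Schulman, E.", "Schulmann, E.") is True (distance 1), while two unrelated names stay below the threshold and are kept apart.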
  20. Chowdhury, G.G.: Natural language processing (2002) 0.00
    
    Abstract
Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely, computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.

Years

Languages

  • e 46
  • d 17

Types

  • a 51
  • el 6
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1