Search (244 results, page 1 of 13)

  • theme_ss:"Computerlinguistik"
  1. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.08
    0.07856258 = product of:
      0.15712516 = sum of:
        0.064335 = weight(_text_:interfaces in 2541) [ClassicSimilarity], result of:
          0.064335 = score(doc=2541,freq=2.0), product of:
            0.22349821 = queryWeight, product of:
              5.2107263 = idf(docFreq=655, maxDocs=44218)
              0.04289195 = queryNorm
            0.28785467 = fieldWeight in 2541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2107263 = idf(docFreq=655, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.09279016 = sum of:
          0.022378203 = weight(_text_:systems in 2541) [ClassicSimilarity], result of:
            0.022378203 = score(doc=2541,freq=2.0), product of:
              0.13181444 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.04289195 = queryNorm
              0.1697705 = fieldWeight in 2541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
          0.029320091 = weight(_text_:29 in 2541) [ClassicSimilarity], result of:
            0.029320091 = score(doc=2541,freq=2.0), product of:
              0.15088047 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.04289195 = queryNorm
              0.19432661 = fieldWeight in 2541, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
          0.04109186 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
            0.04109186 = score(doc=2541,freq=4.0), product of:
              0.15020029 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04289195 = queryNorm
              0.27358043 = fieldWeight in 2541, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
      0.5 = coord(2/4)
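The explain tree above is standard Lucene ClassicSimilarity output: for each matching term, fieldWeight = sqrt(freq) · idf · fieldNorm and queryWeight = idf · queryNorm, the per-term products are summed, and the sum is scaled by the coord factor. As a sanity check, a short sketch recomputing the document score from the numbers shown (the formula is Lucene's documented TF-IDF scoring; the variable names are mine):

```python
import math

# Recompute the ClassicSimilarity score for doc 2541 from the explain tree.
# Per term: score = queryWeight * fieldWeight, where
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(freq) * idf * fieldNorm
QUERY_NORM = 0.04289195
FIELD_NORM = 0.0390625

def term_score(idf, freq, query_norm=QUERY_NORM, field_norm=FIELD_NORM):
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

terms = [
    (5.2107263, 2.0),  # interfaces
    (3.0731742, 2.0),  # systems
    (3.5176873, 2.0),  # 29
    (3.5018296, 4.0),  # 22
]
raw = sum(term_score(idf, freq) for idf, freq in terms)
score = raw * 0.5  # coord factor from the explain output: coord(2/4)
print(round(score, 8))
```

The result matches the reported top-level score of 0.07856258 to rounding precision.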
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion, without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
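The abstract does not say how ChemSpell measures similarity between words; a common baseline for this kind of spelling suggestion is Levenshtein edit distance against the dictionary vocabulary. A minimal sketch under that assumption (the vocabulary and threshold are illustrative, not NLM's actual implementation):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word, vocabulary, max_dist=2):
    # Rank vocabulary terms by closeness to the (possibly misspelled) query.
    hits = [(edit_distance(word, v), v) for v in vocabulary]
    return [v for d, v in sorted(hits) if d <= max_dist]

vocab = ["toxicology", "toluene", "benzene", "toxin"]
print(suggest("toxicologie", vocab))  # -> ['toxicology']
```

A production system like AZdict would add morphological attributes and chemical-name-specific matching on top of such a core.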
  2. Mustafa el Hadi, W.: Automatic term recognition & extraction tools : examining the new interfaces and their effective communication role in LSP discourse (1998) 0.08
    
    Abstract
    In this paper we discuss the possibility of reorienting NLP (Natural Language Processing) systems towards the extraction not only of terms and their semantic relations, but also towards a variety of other uses: the storage, accessing and retrieving of Language for Special Purposes (LSP) lexical combinations, and the provision of contexts and other information on terms through the integration of more interfaces to terminological databases, term managing systems and existing NLP systems. The aim of making such interfaces available is to increase the efficiency of the systems and to improve terminology-oriented text analysis. Since automatic term extraction is the backbone of many applications such as machine translation (MT), indexing, technical writing, thesaurus construction and knowledge representation, developments in this area will have a significant impact.
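The survey does not tie itself to one extraction algorithm; a very rough statistical approximation of automatic term recognition is to collect frequent stopword-free n-grams as candidate terms. A toy sketch (the stopword list, threshold, and sample text are illustrative, not any of the surveyed tools):

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for"}

def candidate_terms(text, n=2, min_freq=2):
    # Naive statistical term recognition: frequent stopword-free bigrams.
    words = re.findall(r"[a-z]+", text.lower())
    grams = Counter(
        " ".join(words[i:i + n])
        for i in range(len(words) - n + 1)
        if not any(w in STOPWORDS for w in words[i:i + n])
    )
    return [g for g, c in grams.most_common() if c >= min_freq]

text = ("Machine translation systems extract terms. "
        "Machine translation needs term extraction. "
        "Term extraction helps machine translation.")
print(candidate_terms(text))  # -> ['machine translation', 'term extraction']
```

Real term-recognition tools add linguistic filters (part-of-speech patterns, termhood statistics) on top of raw frequency.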
  3. ¬The language engineering directory (1993) 0.07
    
    Abstract
    This is a reference guide to language technology organizations and products around the world. Areas covered in the directory include: Artificial intelligence, Document storage and retrieval, Electronic dictionaries (mono- and multilingual), Expert language systems, Multilingual word processors, Natural language database interfaces, Term databanks, Terminology management, Text content analysis, Thesauri
  4. Pritchard-Schoch, T.: Comparing natural language retrieval : Win & Freestyle (1995) 0.06
    
    Abstract
    Reports on a comparison of two natural language interfaces to full text legal databases: WIN for access to WESTLAW databases and FREESTYLE for access to the LEXIS database. 30 legal issues phrased as natural language queries were presented to identical libraries in both systems. The top 20 ranked documents from each search were analyzed and reviewed for relevance to the legal issue.
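Reviewing the top 20 ranked documents for relevance amounts to measuring precision at a fixed cutoff. A small sketch of that evaluation step (the relevance judgments below are hypothetical, not the study's data):

```python
def precision_at_k(relevance, k=20):
    # relevance: 1/0 judgments for documents in ranked order.
    top = relevance[:k]
    return sum(top) / len(top)

# Hypothetical judgments for the top 20 results of each system.
win = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
freestyle = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(precision_at_k(win), precision_at_k(freestyle))  # -> 0.75 0.65
```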
  5. Lee, Y.-H.; Evens, M.W.: Natural language interface for an expert system (1998) 0.05
    
    Abstract
    Presents a complete analysis of the underlying principles of natural language interfaces from the screen manager to the parser / understander. The main focus is on the design and development of a subsystem for understanding natural language input in an expert system. Considers that fast response time and user friendliness are the most important considerations in the design. The screen manager provides an easy editing capability for users and the spelling correction system can detect most spelling errors and correct them automatically, quickly and effectively. The Lexical Functional Grammar (LFG) parser and the understander are designed to handle most types of simple sentences, fragments, and ellipses
    Source
    Expert systems. 15(1998) no.4, S.233-239
  6. Haas, S.W.: ¬A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.05
    
  7. Chowdhury, G.G.: Natural language processing (2002) 0.04
    
    Abstract
    Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. NLP researchers aim to gather knowledge on how human beings understand and use language, so that appropriate tools and techniques can be developed to make computer systems understand and manipulate natural languages to perform desired tasks. The foundations of NLP lie in a number of disciplines, namely computer and information sciences, linguistics, mathematics, electrical and electronic engineering, artificial intelligence and robotics, and psychology. Applications of NLP include a number of fields of study, such as machine translation, natural language text processing and summarization, user interfaces, multilingual and cross-language information retrieval (CLIR), speech recognition, artificial intelligence, and expert systems. One important application area that is relatively new and has not been covered in previous ARIST chapters on NLP relates to the proliferation of the World Wide Web and digital libraries.
  8. Wagner, J.: Mensch - Computer - Interaktion : Sprachwissenschaftliche Aspekte (2002) 0.04
    
    Abstract
    The study shows that the problems users commonly report with computers are to a large degree caused by the (sometimes deficient) linguistic design of the respective interfaces. It argues that verbal interface elements, despite the conspicuous graphical effort invested in user interfaces, can be regarded as primary for establishing meaning and interactive order when operating a sufficiently complex system. The analysis is based chiefly on recordings of the dialogues of pairs of users working together at a computer, combined with corresponding recordings of the screen contents. The work is a plea for more linguistic competence in software design, and for greater investment by industry in the linguistic design of human-machine interfaces.
  9. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.04
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  10. Lehman, J.F.: Adaptive parsing : self-extending natural language interfaces (19??) 0.04
    
  11. Stede, M.: Lexicalization in natural language generation (2002) 0.04
    
    Abstract
    Natural language generation (NLG), the automatic production of text by computers, is commonly seen as a process consisting of several distinct phases. Choosing words is obviously a central aspect of generating language; in which of these phases it should take place is not entirely clear, however. The decision depends on various factors: what exactly is seen as an individual lexical item; how the relation between word meaning and background knowledge (concepts) is defined; how one accounts for the interactions between individual lexical choices in the same sentence; what criteria are employed for choosing between similar words; and whether or not output is required in one or more languages. This article surveys these issues and the answers that have been proposed in NLG research. For many applications of natural language processing, large-scale lexical resources have become available in recent years, such as the WordNet database. In language generation, however, generic lexicons are not yet in use; rather, almost every generation project develops its own format for lexical representations. The reason is that the entries of a generation lexicon need their specific interfaces to the input representations processed by the generator; lexical semantics in an NLG lexicon needs to be tailored to the input. On the other hand, the large lexicons used for language analysis typically have only very limited semantic information at all. Yet the syntactic behavior of words remains the same regardless of the particular application; thus, it should be possible to build at least parts of generic NLG lexical entries automatically, which could then be used by different systems.
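The lexical-choice step the article discusses can be pictured, in miniature, as a mapping from concepts to lemmas constrained by criteria such as register. A toy sketch (the lexicon, concept names, and register labels are all invented for illustration):

```python
# Toy lexical choice: pick a lemma for a concept, respecting register.
LEXICON = {
    "BUY": [("purchase", "formal"), ("buy", "neutral"), ("pick up", "casual")],
    "BIG": [("substantial", "formal"), ("big", "neutral"), ("huge", "casual")],
}

def lexicalize(concept, register="neutral"):
    options = LEXICON[concept]
    for lemma, reg in options:
        if reg == register:
            return lemma
    return options[0][0]  # fall back to the first listed lemma

print(lexicalize("BUY", "formal"))  # -> purchase
print(lexicalize("BIG", "casual"))  # -> huge
```

Real NLG lexicons additionally encode syntactic behavior and interactions between choices in the same sentence, which is precisely why, as the article notes, they are tailored to each generator's input representation.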
  12. Addison, E.R.; Wilson, H.D.; Feder, J.: ¬The impact of plain English searching on end users (1993) 0.03
    
    Abstract
    Commercial software products are available with plain English searching capabilities, as engines for online and CD-ROM information services and for internal text information management. With plain English interfaces, end users do not need to master the keyword-and-connector approach of the Boolean search query language. Describes plain English searching and its impact on the process of full text retrieval. Explores the issues of ease of use, reliability, and implications for the total research process.
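What such a plain English engine does behind the scenes varies by product; the simplest possible reading, keyword extraction with an implicit AND, can be sketched as follows, purely as an illustration of the idea (stopword list invented):

```python
STOPWORDS = {"the", "a", "of", "in", "on", "for", "to", "and",
             "or", "is", "are", "what", "how"}

def to_boolean(plain_query):
    # Strip stopwords and join the remaining keywords with AND.
    terms = [w for w in plain_query.lower().split() if w not in STOPWORDS]
    return " AND ".join(terms)

print(to_boolean("what are the effects of benzene on liver function"))
# -> effects AND benzene AND liver AND function
```

Commercial systems of the period (WIN, FREESTYLE) went well beyond this, adding phrase detection, term weighting, and ranked output rather than strict Boolean matching.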
  13. Helbig, H.; Gnörlich, C.; Leveling, J.: Natürlichsprachlicher Zugang zu Informationsanbietern im Internet und zu lokalen Datenbanken (2000) 0.02
    
    Abstract
    The creation of a natural language interface (NLI), which allows a user to formulate queries to information providers in his or her native language, is one of the most interesting challenges in information retrieval and natural language processing. This paper describes methods for translating natural language queries into expressions of formal retrieval languages, both for information resources on the Internet and for local databases. The methods presented are part of the information retrieval system LINAS, which was developed at the FernUniversität Hagen to offer users natural language access to local and Internet-distributed scientific and technical information. The LINAS system differs from other systems and natural language interfaces (cf. OSIRIS, or the earlier systems INTELLECT and Q&A) in explicitly incorporating background knowledge and special dialogue models into the translation process. Moreover, the system aims at a complete understanding of the natural language text, whereas other systems typically only search the input for keywords or particular grammatical patterns. A particular focus of LINAS is the representation and evaluation of the semantic relations between the concepts given in the user's query.
  14. Bedathur, S.; Narang, A.: Mind your language : effects of spoken query formulation on retrieval effectiveness (2013) 0.02
    
    Abstract
    Voice search is becoming a popular mode for interacting with search engines. As a result, research has gone into building better voice transcription engines, interfaces, and search engines that better handle the inherent verbosity of spoken queries. However, when one considers its use by non-native speakers of English, another aspect that becomes important is the formulation of the query by users. In this paper, we present the results of a preliminary study that we conducted with non-native English speakers who formulate queries for given retrieval tasks. Our results show that current search engines are sensitive in their rankings to query formulation, which highlights the need for developing more robust ranking methods.
  15. Czejdo. B.D.; Tucci, R.P.: ¬A dataflow graphical language for database applications (1994) 0.02
    
    Abstract
    Discusses a graphical language for information retrieval and processing. A great deal of recent activity has occurred in the area of improving access to database systems; however, current results are restricted to simple interfacing of database systems. Proposes a graphical language for specifying complex applications.
    Date
    20.10.2000 13:29:46
  16. Galvez, C.; Moya-Anegón, F. de: ¬An evaluation of conflation accuracy using finite-state transducers (2006) 0.02
    
    Abstract
    Purpose - To evaluate the accuracy of conflation methods based on finite-state transducers (FSTs). Design/methodology/approach - Incorrectly lemmatized and stemmed forms may lead to the retrieval of inappropriate documents. Experimental studies to date have focused on retrieval performance, but very few on conflation performance. The process of normalization we used involved a linguistic toolbox that allowed us to construct, through graphic interfaces, electronic dictionaries represented internally by FSTs. The lexical resources developed were applied to a Spanish test corpus for merging term variants in canonical lemmatized forms. Conflation performance was evaluated in terms of an adaptation of recall and precision measures, based on accuracy and coverage, not actual retrieval. The results were compared with those obtained using a Spanish version of the Porter algorithm. Findings - The conclusion is that the main strength of lemmatization is its accuracy, whereas its main limitation is the underanalysis of variant forms. Originality/value - The report outlines the potential of transducers in their application to normalization processes.
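The adapted measures described (accuracy and coverage rather than actual retrieval performance) can be sketched as follows; the Spanish word pairs and the exact metric definitions below are illustrative assumptions, not the paper's data:

```python
def conflation_metrics(pairs):
    # pairs: (variant, system_output, gold_lemma) triples; output is None
    # when the dictionary/transducer could not analyse the variant.
    analysed = [(out, gold) for _, out, gold in pairs if out is not None]
    coverage = len(analysed) / len(pairs)            # share of forms analysed
    accuracy = sum(out == gold for out, gold in analysed) / len(analysed)
    return accuracy, coverage

pairs = [
    ("niños",   "niño",   "niño"),
    ("cantaba", "cantar", "cantar"),
    ("mesas",   "mesa",   "mesa"),
    ("fue",     None,     "ir"),   # irregular form not covered
]
acc, cov = conflation_metrics(pairs)
print(acc, cov)  # -> 1.0 0.75
```

This mirrors the paper's finding in miniature: lemmatization is highly accurate on the forms it analyses, but underanalysis of variants limits its coverage.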
  17. Ciganik, M.: Pred koordinaciou a kooperaciou informacnych systemov (1997) 0.02
    
    Abstract
    The information requirements of library users can only be met if individual information systems are compatible, i.e. based on the use of a single information language. Points out that natural language is the best instrument for the integration of information systems. Presents a model of the structure of natural language, extended by metaknowledge elements, which makes it possible to analyse and represent text without the need for syntax analysis.
    Footnote
    Translation of the title: Coordination of information systems
    Source
    Kniznice a informacie. 29(1997) no.10, S.389-396
  18. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.02
    
    Source
    https://arxiv.org/abs/2212.06721
  19. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.02
    
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
  20. Bowker, L.: Information retrieval in translation memory systems : assessment of current limitations and possibilities for future development (2002) 0.02
    
    Abstract
    A translation memory system is a new type of human language technology (HLT) tool that is gaining popularity among translators. Such tools allow translators to store previously translated texts in a type of aligned bilingual database, and to recycle relevant parts of these texts when producing new translations. Currently, these tools retrieve information from the database using superficial character string matching, which often results in poor precision and recall. This paper explains how translation memory systems work, and it considers some possible ways of introducing more sophisticated information retrieval techniques into such systems by taking syntactic and semantic similarity into account. Some of the suggested techniques are inspired by those used in other areas of HLT, and some by techniques used in information science.
    Source
    Knowledge organization. 29(2002) nos.3/4, S.198-203
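The superficial character-matching retrieval this abstract critiques can be sketched in a few lines. This is a toy memory, not any real TM product: the sentence pairs and the 0.7 threshold are invented for illustration.

```python
import difflib

# Hedged sketch of surface-level TM retrieval: a toy aligned bilingual
# memory, ranked by character-string similarity only. The sentence
# pairs and the 0.7 threshold are invented for illustration.
MEMORY = {
    "The printer is out of paper.": "Der Drucker hat kein Papier mehr.",
    "Turn off the printer before cleaning.": "Schalten Sie den Drucker vor der Reinigung aus.",
}

def fuzzy_matches(segment: str, threshold: float = 0.7):
    """Return (score, source, target) hits above the fuzzy-match
    threshold, best first, using character-level similarity only."""
    scored = [
        (difflib.SequenceMatcher(None, segment.lower(), src.lower()).ratio(), src, tgt)
        for src, tgt in MEMORY.items()
    ]
    return sorted((s for s in scored if s[0] >= threshold), reverse=True)

# A near-identical segment is retrieved as a fuzzy match ...
hit = fuzzy_matches("The printer is out of toner.")
# ... but a paraphrase with the same meaning falls below the threshold,
# because only surface characters are compared, not syntax or semantics.
miss = fuzzy_matches("Paper has run out in the printer.")
```

The second query illustrates the recall problem the paper addresses: a semantically equivalent but reworded segment retrieves nothing, which is exactly where syntactic and semantic similarity measures could help.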

Languages

  • e 191
  • d 45
  • ru 4
  • m 2
  • el 1
  • f 1

Types

  • a 195
  • m 30
  • el 25
  • s 12
  • x 5
  • p 3
  • d 1
  • r 1

Subjects

Classifications