Search (479 results, page 1 of 24)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.23
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A Thesis presented to The University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
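    The approach summarised in the abstract above pairs word-association ("glue") measures with the LocalMaxs selection criterion. The Python sketch below is only a minimal illustration of that general idea, assuming symmetric conditional probability (SCP) as the glue score and a simplified local-maximum test; the thesis's three new measures and its exact LocalMaxs variant are not reproduced here, and the function names and toy input are invented for the example.

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    """Yield all contiguous n-grams of a token list as tuples."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def build_counts(tokens, max_n):
    """Frequency table of all n-grams up to length max_n."""
    counts = Counter()
    for n in range(1, max_n + 1):
        counts.update(ngrams(tokens, n))
    return counts

def scp(gram, counts):
    """Symmetric conditional probability as a simple 'glue' score:
    freq(gram)^2 divided by the average product of the frequencies
    of the two parts, over all binary splits of the n-gram."""
    f = counts[gram]
    if len(gram) == 1 or f == 0:
        return 0.0
    splits = [(gram[:i], gram[i:]) for i in range(1, len(gram))]
    avg = sum(counts[l] * counts[r] for l, r in splits) / len(splits)
    return (f * f) / avg if avg else 0.0

def local_maxs(tokens, max_n=4):
    """Keep n-grams whose glue is a local maximum with respect to the
    (n-1)-grams they contain and the (n+1)-grams that contain them."""
    counts = build_counts(tokens, max_n + 1)
    glue = {g: scp(g, counts) for g in counts if len(g) >= 2}
    terms = []
    for g, s in glue.items():
        if len(g) > max_n:
            continue
        subs = [glue.get(g[:-1], 0.0), glue.get(g[1:], 0.0)]
        supers = [v for h, v in glue.items()
                  if len(h) == len(g) + 1 and (h[:-1] == g or h[1:] == g)]
        omega_sup = max(supers, default=0.0)
        if len(g) == 2:
            if s > omega_sup:
                terms.append(g)
        elif s >= max(subs) and s > omega_sup:
            terms.append(g)
    return terms

tokens = ("multi word term extraction for web page summarization uses "
          "multi word term extraction and web page summarization").split()
print(local_maxs(tokens))
```

    With this toy input, the repeated phrases "multi word term extraction" and "web page summarization" are selected because their glue is at least as high as that of the n-grams they contain and strictly higher than that of any (n+1)-gram containing them, while their nested sub-phrases are rejected.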
  3. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.15
    Source
    https://arxiv.org/abs/2212.06721
  4. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.08
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
  5. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.06
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : University of Illinois at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  6. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.05
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
    Source
    Annual review of information science and technology. 31(1996), S.83-119
  7. Mauldin, M.L.: Conceptual information retrieval : a case study in adaptive partial parsing (1991) 0.05
    LCSH
    FERRET (Information retrieval system)
    Information storage and retrieval
    RSWK
    Freitextsuche / Information Retrieval
    Information Retrieval / Expertensystem
    Syntaktische Analyse Information Retrieval
    Subject
    Freitextsuche / Information Retrieval
    Information Retrieval / Expertensystem
    Syntaktische Analyse Information Retrieval
    FERRET (Information retrieval system)
    Information storage and retrieval
  8. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.05
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
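    At its simplest, the dictionary-based spelling "suggestion" facility described above can be approximated by ranking vocabulary entries by edit distance to the query token. The sketch below illustrates only that general idea; it is not NLM's AZdict/ChemSpell implementation, and the vocabulary, cutoff, and function names are invented for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def suggest(token: str, vocabulary, max_distance: int = 2, k: int = 5):
    """Return up to k vocabulary words within max_distance edits,
    closest first (ties broken alphabetically)."""
    scored = ((edit_distance(token.lower(), w.lower()), w) for w in vocabulary)
    hits = sorted((d, w) for d, w in scored if d <= max_distance)
    return [w for _, w in hits[:k]]

# Hypothetical vocabulary; a real system would load a domain dictionary.
vocab = ["toxicology", "toxin", "dioxin", "benzene", "toluene"]
print(suggest("toxicolgy", vocab))   # -> ['toxicology']
```

    A production system would add frequency weighting, phonetic matching, and chemical-name handling on top of such a baseline.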
  9. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.04
    Abstract
    Language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between the speed performance and the translation performance, and what form the translated result is presented in. About 100,000 Web pages translated in the last 4 months of 1997 are used for a quantitative study of online and real-time Web page translation.
    Date
    16. 2.2000 14:22:39
    Source
    Journal of the American Society for Information Science. 51(2000) no.3, S.281-296
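    Bilingual-dictionary query translation of the kind outlined above can be illustrated with a minimal sketch: each source-language query term is looked up in a bilingual dictionary, and when several translations exist the candidate that is most frequent in a target-language corpus is kept. This shows only the generic dictionary-plus-corpus idea, not the MTIR system's actual selection procedure or its transliteration step; the dictionary, frequency table, and function name below are invented.

```python
from collections import Counter

# Hypothetical bilingual dictionary: source term -> candidate translations.
BILINGUAL = {
    "資訊": ["information", "data"],
    "檢索": ["retrieval", "search"],
}

# Hypothetical target-language corpus used to rank ambiguous candidates.
corpus_freq = Counter(
    "information retrieval systems rank documents by information need".split()
)

def translate_query(terms, dictionary, freq):
    """Pick, for each source term, the dictionary candidate with the
    highest target-corpus frequency; unknown terms (e.g. proper names,
    which MTIR handles by transliteration) pass through untranslated."""
    translated = []
    for term in terms:
        candidates = dictionary.get(term)
        if not candidates:
            translated.append(term)   # out-of-vocabulary: keep as-is
            continue
        translated.append(max(candidates, key=lambda c: freq[c]))
    return translated

print(translate_query(["資訊", "檢索"], BILINGUAL, corpus_freq))
# -> ['information', 'retrieval']
```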
  10. Gachot, D.A.; Lange, E.; Yang, J.: The SYSTRAN NLP browser : an application of machine translation technology in cross-language information retrieval (1998) 0.04
    Series
    The Kluwer International series on information retrieval
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  11. Pirkola, A.; Hedlund, T.; Keskustalo, H.; Järvelin, K.: Dictionary-based cross-language information retrieval : problems, methods, and research findings (2001) 0.04
    Source
    Information retrieval. 4(2001), S.209-230
  12. Warner, A.J.: Natural language processing (1987) 0.04
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  13. McCune, B.P.; Tong, R.M.; Dean, J.S.: Rubric: a system for rule-based information retrieval (1985) 0.03
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.440-445.
  14. Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Grossman, D.; Frieder, O.; Goharian, N.: Fusion of effective retrieval strategies in the same information retrieval system (2004) 0.03
    Abstract
    Prior efforts have shown that under certain situations retrieval effectiveness may be improved via the use of data fusion techniques. Although these improvements have been observed from the fusion of result sets from several distinct information retrieval systems, it has often been thought that fusing different document retrieval strategies in a single information retrieval system will lead to similar improvements. In this study, we show that this is not the case. We hold constant systemic differences such as parsing, stemming, phrase processing, and relevance feedback, and fuse result sets generated from highly effective retrieval strategies in the same information retrieval system. From this, we show that data fusion of highly effective retrieval strategies alone shows little or no improvement in retrieval effectiveness. Furthermore, we present a detailed analysis of the performance of modern data fusion approaches, and demonstrate the reasons why they do not perform well when applied to this problem. Detailed results and analyses are included to support our conclusions.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.10, S.859-868
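    A common baseline for the kind of result-set fusion examined above is CombSUM/CombMNZ: normalise each run's scores, sum them per document, and, for CombMNZ, multiply by the number of runs that retrieved the document. The sketch below shows only that standard baseline, not the specific fusion approaches analysed in the paper; the run data are invented.

```python
def min_max_normalise(run):
    """Scale one run's scores into [0, 1] so runs are comparable."""
    lo, hi = min(run.values()), max(run.values())
    span = (hi - lo) or 1.0
    return {doc: (score - lo) / span for doc, score in run.items()}

def comb_mnz(runs, mnz=True):
    """Fuse several {doc_id: score} runs.
    CombSUM: sum of normalised scores per document; CombMNZ: that sum
    multiplied by the number of runs in which the document appears."""
    fused, hits = {}, {}
    for run in map(min_max_normalise, runs):
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + score
            hits[doc] = hits.get(doc, 0) + 1
    if mnz:
        fused = {doc: s * hits[doc] for doc, s in fused.items()}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical runs from different retrieval strategies.
run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
run_b = {"d2": 0.9, "d3": 0.8, "d4": 0.2}
print(comb_mnz([run_a, run_b]))
```

    The paper's point is that when the fused runs come from equally effective strategies inside one system, this kind of fusion yields little or no gain.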
  15. Smeaton, A.F.: Natural language processing used in information retrieval tasks : an overview of achievements to date (1995) 0.03
    Source
    Encyclopedia of library and information science. Vol.55, [=Suppl.18]
  16. Perez-Carballo, J.; Strzalkowski, T.: Natural language information retrieval : progress report (2000) 0.03
    Source
    Information processing and management. 36(2000) no.1, S.155-205
  17. Rau, L.F.; Jacobs, P.S.; Zernik, U.: Information extraction and text summarization using linguistic knowledge acquisition (1989) 0.03
    Abstract
    Storing and accessing texts in a conceptual format has a number of advantages over traditional document retrieval methods. A conceptual format facilitates natural language access to text information. It can support imprecise and inexact queries, conceptual information summarisation, and, ultimately, document translation. Describes 2 methods which have been implemented in a prototype intelligent information retrieval system called SCISOR (System for Conceptual Information Summarisation, Organization and Retrieval). Describes the text processing, language acquisition, and summarisation components of SCISOR.
    Source
    Information processing and management. 25(1989) no.4, S.419-428
  18. Frappaolo, C.: Artificial intelligence and text retrieval : a current perspective on the state of the art (1992) 0.03
    Abstract
    Brief discussion of the ways in which computerized information retrieval and database searching can be enhanced by integrating artificial intelligence with such search systems. Explores the possibility of integrating the powers and capabilities of artificial intelligence (specifically natural language processing) with text retrieval
    Imprint
    Medford, NJ : Learned Information Inc.
  19. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.03
    RSWK
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
    Subject
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
  20. Schwarz, C.: Linguistische Hilfsmittel beim Information Retrieval (1984) 0.03
