Search (280 results, page 1 of 14)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
    0.17391951 = product of:
      0.28986585 = sum of:
        0.06810896 = product of:
          0.20432688 = sum of:
            0.20432688 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20432688 = score(doc=562,freq=2.0), product of:
                0.3635593 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042882618 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.20432688 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.20432688 = score(doc=562,freq=2.0), product of:
            0.3635593 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042882618 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.017430007 = product of:
          0.034860015 = sum of:
            0.034860015 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.034860015 = score(doc=562,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
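The score breakdown above is Lucene's ClassicSimilarity explain output: each matching term contributes queryWeight × fieldWeight, with queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the coord() factors then scale by the fraction of query clauses matched. Below is a minimal sketch that reproduces the numbers of the "_text_:3a" clause of entry 1; the constants are copied from the explain tree, the helper names are ours.

```python
import math

# Constants copied from the explain output for doc 562, term "_text_:3a"
MAX_DOCS   = 44218
DOC_FREQ   = 24
FREQ       = 2.0
QUERY_NORM = 0.042882618
FIELD_NORM = 0.046875

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    """ClassicSimilarity tf: square root of the term frequency in the field."""
    return math.sqrt(freq)

term_idf     = idf(DOC_FREQ, MAX_DOCS)           # ~8.478011
query_weight = term_idf * QUERY_NORM             # ~0.3635593
field_weight = tf(FREQ) * term_idf * FIELD_NORM  # ~0.56201804
term_score   = query_weight * field_weight       # ~0.20432688

print(term_idf, query_weight, field_weight, term_score)
# The per-term scores are summed and multiplied by the coord() factors,
# e.g. 0.28986585 * coord(3/5) = 0.28986585 * 0.6 = 0.17391951 for entry 1.
```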
  2. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.15
    0.14866099 = product of:
      0.2477683 = sum of:
        0.026011413 = weight(_text_:retrieval in 563) [ClassicSimilarity], result of:
          0.026011413 = score(doc=563,freq=2.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.20052543 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.20432688 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.20432688 = score(doc=563,freq=2.0), product of:
            0.3635593 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042882618 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.017430007 = product of:
          0.034860015 = sum of:
            0.034860015 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.034860015 = score(doc=563,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
    A thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
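The thesis above combines three new word association measures with the LocalMaxs algorithm; the measures themselves are not reproduced in this record. As a rough illustration of what a word association measure for multi-word term extraction looks like, here is a minimal pointwise mutual information sketch over adjacent word pairs, a standard measure of this kind rather than one of the thesis's own.

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information:
    PMI(x, y) = log( P(x, y) / (P(x) * P(y)) )."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    scores = {}
    for (x, y), count in bigrams.items():
        if count < min_count:
            continue
        p_xy = count / n_bi
        p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
        scores[(x, y)] = math.log(p_xy / (p_x * p_y))
    # Highest-scoring pairs are the strongest multi-word term candidates.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

text = "multi word term extraction finds multi word terms such as information retrieval"
print(pmi_bigrams(text.split(), min_count=1)[:3])
```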
  3. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.11
    0.10897434 = product of:
      0.27243584 = sum of:
        0.06810896 = product of:
          0.20432688 = sum of:
            0.20432688 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.20432688 = score(doc=862,freq=2.0), product of:
                0.3635593 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042882618 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.20432688 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.20432688 = score(doc=862,freq=2.0), product of:
            0.3635593 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042882618 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.4 = coord(2/5)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.03
    0.03475314 = product of:
      0.086882845 = sum of:
        0.052022826 = weight(_text_:retrieval in 4483) [ClassicSimilarity], result of:
          0.052022826 = score(doc=4483,freq=2.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.40105087 = fieldWeight in 4483, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4483)
        0.034860015 = product of:
          0.06972003 = sum of:
            0.06972003 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.06972003 = score(doc=4483,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    15. 3.2000 10:22:37
  5. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.03
    0.025300657 = product of:
      0.063251644 = sum of:
        0.042916637 = weight(_text_:retrieval in 2345) [ClassicSimilarity], result of:
          0.042916637 = score(doc=2345,freq=4.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.33085006 = fieldWeight in 2345, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.020335007 = product of:
          0.040670015 = sum of:
            0.040670015 = weight(_text_:22 in 2345) [ClassicSimilarity], result of:
              0.040670015 = score(doc=2345,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.2708308 = fieldWeight in 2345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
  6. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.02
    0.023168758 = product of:
      0.057921894 = sum of:
        0.034681883 = weight(_text_:retrieval in 7415) [ClassicSimilarity], result of:
          0.034681883 = score(doc=7415,freq=2.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.26736724 = fieldWeight in 7415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=7415)
        0.02324001 = product of:
          0.04648002 = sum of:
            0.04648002 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
              0.04648002 = score(doc=7415,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.30952093 = fieldWeight in 7415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7415)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
  7. Mauldin, M.L.: Conceptual information retrieval : a case study in adaptive partial parsing (1991) 0.02
    0.023005359 = product of:
      0.115026794 = sum of:
        0.115026794 = weight(_text_:retrieval in 121) [ClassicSimilarity], result of:
          0.115026794 = score(doc=121,freq=22.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.88675684 = fieldWeight in 121, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=121)
      0.2 = coord(1/5)
    
    LCSH
    FERRET (Information retrieval system)
    Information storage and retrieval
    RSWK
    Freitextsuche / Information Retrieval
    Information Retrieval / Expertensystem
    Syntaktische Analyse Information Retrieval
    Subject
    Freitextsuche / Information Retrieval
    Information Retrieval / Expertensystem
    Syntaktische Analyse Information Retrieval
    FERRET (Information retrieval system)
    Information storage and retrieval
  8. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    0.020478481 = product of:
      0.051196203 = sum of:
        0.030654743 = weight(_text_:retrieval in 2541) [ClassicSimilarity], result of:
          0.030654743 = score(doc=2541,freq=4.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.23632148 = fieldWeight in 2541, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.020541461 = product of:
          0.041082922 = sum of:
            0.041082922 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.041082922 = score(doc=2541,freq=4.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
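The AZdict/ChemSpell components described above are not included in this record. As a generic illustration of the spelling-suggestion idea (rank vocabulary words by their edit distance to a possibly misspelled query term), here is a minimal sketch, not NLM's actual implementation; the vocabulary is invented for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(term: str, vocabulary, k: int = 3):
    """Return the k vocabulary words closest to the (possibly misspelled) term."""
    return sorted(vocabulary, key=lambda w: edit_distance(term, w))[:k]

vocab = ["toxicology", "benzene", "toluene", "arsenic", "formaldehyde"]
print(suggest("toxocology", vocab))   # 'toxicology' ranks first
```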
  9. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.02
    0.020272663 = product of:
      0.050681658 = sum of:
        0.030346649 = weight(_text_:retrieval in 1361) [ClassicSimilarity], result of:
          0.030346649 = score(doc=1361,freq=2.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.23394634 = fieldWeight in 1361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1361)
        0.020335007 = product of:
          0.040670015 = sum of:
            0.040670015 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
              0.040670015 = score(doc=1361,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.2708308 = fieldWeight in 1361, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1361)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    THESYS is based on the natural language processing of free-text databases. It yields statistically evaluated correlations between words of the database. These correlations correspond to traditional thesaurus relations. The person who has to build a thesaurus is thus assisted by the proposals made by THESYS. THESYS is being tested on commercial databases under real world conditions. It is part of a text processing project at Siemens, called TINA (Text-Inhalts-Analyse). Software from TINA is actually being applied and evaluated by the US Department of Commerce for patent search and indexing (REALIST: REtrieval Aids by Linguistics and STatistics)
    Date
    6. 1.1999 10:22:07
  10. Gachot, D.A.; Lange, E.; Yang, J.: The SYSTRAN NLP browser : an application of machine translation technology in cross-language information retrieval (1998) 0.02
    0.018021237 = product of:
      0.09010618 = sum of:
        0.09010618 = weight(_text_:retrieval in 6213) [ClassicSimilarity], result of:
          0.09010618 = score(doc=6213,freq=6.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.6946405 = fieldWeight in 6213, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=6213)
      0.2 = coord(1/5)
    
    Series
    The Kluwer International series on information retrieval
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  11. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.02
    0.01737657 = product of:
      0.043441422 = sum of:
        0.026011413 = weight(_text_:retrieval in 4436) [ClassicSimilarity], result of:
          0.026011413 = score(doc=4436,freq=2.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.20052543 = fieldWeight in 4436, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4436)
        0.017430007 = product of:
          0.034860015 = sum of:
            0.034860015 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.034860015 = score(doc=4436,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The language barrier is the major problem that people face in searching for, retrieving, and understanding multilingual collections on the Internet. This paper deals with query translation and document translation in a Chinese-English information retrieval system called MTIR. Bilingual dictionary and monolingual corpus-based approaches are adopted to select suitable translated query terms. A machine transliteration algorithm is introduced to resolve proper name searching. We consider several design issues for document translation, including which material is translated, what roles the HTML tags play in translation, what the tradeoff is between the speed performance and the translation performance, and what form the translated result is presented in. About 100,000 Web pages translated in the last 4 months of 1997 are used for quantitative study of online and real-time Web page translation
    Date
    16. 2.2000 14:22:39
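The abstract above describes dictionary-based query translation for Chinese-English retrieval in MTIR. As a generic illustration, not MTIR's actual resources or disambiguation strategy, here is a minimal dictionary-based query translation sketch with a toy bilingual lexicon (the entries are invented for the example).

```python
# Toy bilingual dictionary; a real system would also handle phrases,
# transliteration of proper names and weighting of translation ambiguity.
ZH_EN = {
    "資訊": ["information"],
    "檢索": ["retrieval", "search"],
    "系統": ["system"],
}

def translate_query(terms, bilingual_dict):
    """Dictionary-based query translation: replace each source-language term
    with all of its target-language candidates (unknown terms pass through)."""
    translated = []
    for term in terms:
        translated.extend(bilingual_dict.get(term, [term]))
    return translated

print(translate_query(["資訊", "檢索", "系統"], ZH_EN))
# -> ['information', 'retrieval', 'search', 'system']
```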
  12. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.02
    0.01737657 = product of:
      0.043441422 = sum of:
        0.026011413 = weight(_text_:retrieval in 75) [ClassicSimilarity], result of:
          0.026011413 = score(doc=75,freq=2.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.20052543 = fieldWeight in 75, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=75)
        0.017430007 = product of:
          0.034860015 = sum of:
            0.034860015 = weight(_text_:22 in 75) [ClassicSimilarity], result of:
              0.034860015 = score(doc=75,freq=2.0), product of:
                0.15016761 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042882618 = queryNorm
                0.23214069 = fieldWeight in 75, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=75)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology data bases, thesauri, library classification systems etc. Essential features of the technology are a lexicographic user interface, variable word description, unlimited list of word readings, a concept language, automatic transformations of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
  13. Pirkola, A.; Hedlund, T.; Keskustalo, H.; Järvelin, K.: Dictionary-based cross-language information retrieval : problems, methods, and research findings (2001) 0.02
    0.017166656 = product of:
      0.085833274 = sum of:
        0.085833274 = weight(_text_:retrieval in 3908) [ClassicSimilarity], result of:
          0.085833274 = score(doc=3908,freq=4.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.6617001 = fieldWeight in 3908, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=3908)
      0.2 = coord(1/5)
    
    Source
    Information retrieval. 4(2001), S.209-230
  14. Sembok, T.M.T.; Rijsbergen, C.J. van: SILOL: a simple logical-linguistic document retrieval system (1990) 0.02
    0.016990583 = product of:
      0.08495291 = sum of:
        0.08495291 = weight(_text_:retrieval in 6684) [ClassicSimilarity], result of:
          0.08495291 = score(doc=6684,freq=12.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.6549133 = fieldWeight in 6684, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6684)
      0.2 = coord(1/5)
    
    Abstract
    Describes a system called SILOL which is based on a logical-linguistic model of document retrieval systems. SILOL uses a shallow semantic translation of natural language texts into a first order predicate representation in performing a document indexing and retrieval process. Some preliminary experiments have been carried out to test the retrieval effectiveness of this system. The results obtained show improvements in the level of retrieval effectiveness, which demonstrate that the approach of using a semantic theory of natural language and logic in document retrieval systems is a valid one
  15. Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Grossman, D.; Frieder, O.; Goharian, N.: Fusion of effective retrieval strategies in the same information retrieval system (2004) 0.02
    0.01645106 = product of:
      0.082255304 = sum of:
        0.082255304 = weight(_text_:retrieval in 2502) [ClassicSimilarity], result of:
          0.082255304 = score(doc=2502,freq=20.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.63411707 = fieldWeight in 2502, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2502)
      0.2 = coord(1/5)
    
    Abstract
    Prior efforts have shown that under certain situations retrieval effectiveness may be improved via the use of data fusion techniques. Although these improvements have been observed from the fusion of result sets from several distinct information retrieval systems, it has often been thought that fusing different document retrieval strategies in a single information retrieval system will lead to similar improvements. In this study, we show that this is not the case. We hold constant systemic differences such as parsing, stemming, phrase processing, and relevance feedback, and fuse result sets generated from highly effective retrieval strategies in the same information retrieval system. From this, we show that data fusion of highly effective retrieval strategies alone shows little or no improvement in retrieval effectiveness. Furthermore, we present a detailed analysis of the performance of modern data fusion approaches, and demonstrate the reasons why they do not perform well when applied to this problem. Detailed results and analyses are included to support our conclusions.
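The study above fuses result sets produced by different retrieval strategies within one system; the exact fusion methods it analyses are not listed in this record. As an illustration of the classic fusion baselines discussed in this literature, here is a minimal CombSUM/CombMNZ sketch, assumed purely for illustration.

```python
def comb_sum(runs):
    """CombSUM: a document's fused score is the sum of its min-max normalised
    scores across all result lists in which it appears."""
    fused = {}
    for run in runs:                     # each run maps doc id -> raw score
        lo, hi = min(run.values()), max(run.values())
        for doc, score in run.items():
            norm = (score - lo) / (hi - lo) if hi > lo else 0.0
            fused[doc] = fused.get(doc, 0.0) + norm
    return fused

def comb_mnz(runs):
    """CombMNZ: CombSUM multiplied by the number of lists returning the document."""
    summed = comb_sum(runs)
    hits = {doc: sum(doc in run for run in runs) for doc in summed}
    return {doc: summed[doc] * hits[doc] for doc in summed}

run_a = {"d1": 12.0, "d2": 9.5, "d3": 4.1}
run_b = {"d2": 0.80, "d4": 0.55, "d1": 0.10}
print(sorted(comb_mnz([run_a, run_b]).items(), key=lambda kv: -kv[1]))
```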
  16. Frappaolo, C.: Artificial intelligence and text retrieval : a current perspective on the state of the art (1992) 0.02
    0.015017696 = product of:
      0.07508848 = sum of:
        0.07508848 = weight(_text_:retrieval in 7097) [ClassicSimilarity], result of:
          0.07508848 = score(doc=7097,freq=6.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.5788671 = fieldWeight in 7097, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=7097)
      0.2 = coord(1/5)
    
    Abstract
    Brief discussion of the ways in which computerized information retrieval and database searching can be enhanced by integrating artificial intelligence with such search systems. Explores the possibility of integrating the powers and capabilities of artificial intelligence (specifically natural language processing) with text retrieval
  17. Wenzel, F.: Semantische Eingrenzung im Freitext-Retrieval auf der Basis morphologischer Segmentierungen (1980) 0.02
    0.015017696 = product of:
      0.07508848 = sum of:
        0.07508848 = weight(_text_:retrieval in 2037) [ClassicSimilarity], result of:
          0.07508848 = score(doc=2037,freq=6.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.5788671 = fieldWeight in 2037, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=2037)
      0.2 = coord(1/5)
    
    Abstract
    The basic problem in freetext retrieval is that the retrieval language is not properly adapted to that of the author. Morphological segmentation, where words with the same root are grouped together in the inverted file, is a good eliminator of noise and information loss, providing high recall but low precision
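The approach above groups words sharing the same root under one entry in the inverted file. Here is a minimal sketch of such a segmentation-based index, using a crude suffix stripper as a stand-in for the morphological segmentation actually described in the paper; the suffix list and documents are invented for the example.

```python
from collections import defaultdict

def crude_stem(word: str) -> str:
    """Toy suffix stripper standing in for real morphological segmentation."""
    for suffix in ("ungen", "ung", "en", "e", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(docs):
    """Inverted file keyed by root form, so inflected variants collapse together."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[crude_stem(token)].add(doc_id)
    return index

docs = {1: "Segmentierung von Wörtern", 2: "morphologische Segmentierungen im Retrieval"}
index = build_index(docs)
print(index[crude_stem("segmentierung")])   # -> {1, 2}: both variants share one entry
```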
  18. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.02
    0.015017696 = product of:
      0.07508848 = sum of:
        0.07508848 = weight(_text_:retrieval in 1172) [ClassicSimilarity], result of:
          0.07508848 = score(doc=1172,freq=6.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.5788671 = fieldWeight in 1172, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1172)
      0.2 = coord(1/5)
    
    RSWK
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
    Subject
    Englisch / Anapher <Syntax> / Hypertext / Information Retrieval / Korpus <Linguistik>
  19. Yannakoudakis, E.J.; Daraki, J.J.: Lexical clustering and retrieval of bibliographic records (1994) 0.01
    0.014866761 = product of:
      0.0743338 = sum of:
        0.0743338 = weight(_text_:retrieval in 1045) [ClassicSimilarity], result of:
          0.0743338 = score(doc=1045,freq=12.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.5730491 = fieldWeight in 1045, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1045)
      0.2 = coord(1/5)
    
    Abstract
    Presents a new system that enables users to retrieve catalogue entries on the basis of their lexical similarities and to cluster records in a dynamic fashion. Describes the information retrieval system developed by the Department of Informatics, Athens University of Economics and Business, Greece. The system also offers the means for cyclic retrieval of records from each cluster while allowing the user to define the field to be used in each case. The approach is based on logical keys which are derived from pertinent bibliographic fields and are used for all clustering and information retrieval functions
    Source
    Information retrieval: new systems and current research. Proceedings of the 15th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Glasgow 1993. Ed.: Ruben Leon
  20. Smeaton, A.F.: Progress in the application of natural language processing to information retrieval tasks (1992) 0.01
    0.014714277 = product of:
      0.073571384 = sum of:
        0.073571384 = weight(_text_:retrieval in 7080) [ClassicSimilarity], result of:
          0.073571384 = score(doc=7080,freq=4.0), product of:
            0.12971628 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042882618 = queryNorm
            0.5671716 = fieldWeight in 7080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=7080)
      0.2 = coord(1/5)
    
    Abstract
    Account of recent developments in automatic and semi-automatic text indexing as well as in the generation of thesauri, text retrieval, abstracting and summarization

Types

  • a 236
  • m 24
  • el 16
  • s 13
  • x 9
  • p 2
  • b 1
  • d 1
  • r 1