Search (2 results, page 1 of 1)

  • Filter: author_ss:"Lassalle, E."
  1. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.01
    0.013447259 = product of:
      0.040341776 = sum of:
        0.040341776 = product of:
          0.08068355 = sum of:
            0.08068355 = weight(_text_:indexing in 97) [ClassicSimilarity], result of:
              0.08068355 = score(doc=97,freq=6.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3662626 = fieldWeight in 97, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=97)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
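    A note on the arithmetic (not part of the search output): under Lucene's ClassicSimilarity, the score above is simply the product of the listed factors, with tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal Python sketch, taking queryNorm and fieldNorm as given from the explain output (the same computation applies to the second record below):

        import math

        tf = math.sqrt(6.0)                     # tf(freq=6.0) ~ 2.4494898
        idf = 1 + math.log(44218 / (2614 + 1))  # idf(docFreq=2614, maxDocs=44218) ~ 3.8278677
        query_norm = 0.057548698                # queryNorm, copied from the explain output
        field_norm = 0.0390625                  # fieldNorm(doc=97), copied from the explain output

        query_weight = idf * query_norm         # queryWeight ~ 0.2202888
        field_weight = tf * idf * field_norm    # fieldWeight ~ 0.3662626
        weight = query_weight * field_weight    # weight(_text_:indexing in 97) ~ 0.08068355

        score = weight * 0.5 * (1.0 / 3.0)      # coord(1/2) and coord(1/3)
        print(round(score, 9))                  # ~ 0.013447259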
    
    Abstract
    Robertson and Spärck Jones pioneered experimental probabilistic models (the Binary Independence Model), combining a typology that generalizes the Boolean model, frequency counts to calculate elementary weightings, and the combination of these weightings into a global probabilistic estimate. However, this model did not consider dependencies between indexing terms. An extension to mixture models (e.g., using a 2-Poisson law) made it possible to take these dependencies into account from a macroscopic point of view (BM25), along with shallow linguistic processing of co-references. Newer approaches (language models, for example "bag of words" models, probabilistic dependencies between queries and documents, and consequently Bayesian inference using a conjugate Dirichlet prior) furnished new solutions for document structuring (categorization) and for index smoothing. To date, the main issues in these probabilistic models have been addressed from a formal point of view only, so linguistic properties are neglected in the indexing language. The authors examine how linguistic and semantic modelling can be integrated into indexing languages, and set up a hybrid model that makes it possible to deal with different information retrieval problems in a unified way.
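    The weighting schemes named in the abstract have standard textbook forms; purely as an illustrative sketch (not code from the chapter, and using the conventional parameter names k1, b and mu), the BM25 term weight and the Dirichlet-smoothed query-likelihood probability look like this:

        import math

        def bm25_weight(tf, df, doc_len, avg_doc_len, num_docs, k1=1.2, b=0.75):
            """Okapi BM25 term weight: idf times a saturating, length-normalized tf."""
            idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
            return idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))

        def dirichlet_term_prob(tf, doc_len, coll_tf, coll_len, mu=2000.0):
            """Query-likelihood term probability with Dirichlet-prior smoothing."""
            return (tf + mu * coll_tf / coll_len) / (doc_len + mu)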
  2. Lassalle, E.: Text retrieval : from a monolingual system to a multilingual system (1993) 0.01
    0.01086929 = product of:
      0.03260787 = sum of:
        0.03260787 = product of:
          0.06521574 = sum of:
            0.06521574 = weight(_text_:indexing in 7403) [ClassicSimilarity], result of:
              0.06521574 = score(doc=7403,freq=2.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.29604656 = fieldWeight in 7403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7403)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes the TELMI monolingual text retrieval system and its planned extension to a multilingual system. TELMI is designed for medium-sized databases containing short texts. The characteristics of the system are fine-grained natural language processing (NLP); an open domain and a large-scale knowledge base; automated indexing based on a conceptual representation of texts; and reusability of the NLP tools. Discusses the French MINITEL service, the MGS information service and the TELMI research system, covering the full-text system; the NLP architecture; the lexical, syntactic and semantic levels; and an example of the use of a generic system.