Search (24 results, page 2 of 2)

  • theme_ss:"Sprachretrieval"
  • type_ss:"a"
  1. Ferret, O.; Grau, B.; Hurault-Plantet, M.; Illouz, G.; Jacquemin, C.; Monceaux, L.; Robba, I.; Vilnat, A.: How NLP can improve question answering (2002) 0.00
    6.106462E-4 = product of:
      0.0024425848 = sum of:
        0.0024425848 = product of:
          0.007327754 = sum of:
            0.007327754 = weight(_text_:a in 1850) [ClassicSimilarity], result of:
              0.007327754 = score(doc=1850,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.13239266 = fieldWeight in 1850, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1850)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
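     The breakdown above is Lucene ClassicSimilarity (TF-IDF) explain output. As a minimal sketch (Python, for illustration only), the reported factors for entry 1 (doc 1850) recombine into the displayed score roughly as follows:

         import math

         # Factors reported in the explain output for the term "a" in doc 1850
         freq = 6.0                                # termFreq
         idf = 1 + math.log(44218 / (37942 + 1))   # idf(docFreq=37942, maxDocs=44218) ~= 1.153047
         query_norm = 0.04800207                   # queryNorm
         field_norm = 0.046875                     # fieldNorm(doc=1850)

         tf = math.sqrt(freq)                      # 2.4494898
         field_weight = tf * idf * field_norm      # 0.13239266 (fieldWeight)
         query_weight = idf * query_norm           # 0.055348642 (queryWeight)
         raw = query_weight * field_weight         # 0.007327754

         # coord(1/3) and coord(1/4): only 1 of 3 and 1 of 4 query clauses matched
         score = raw * (1 / 3) * (1 / 4)
         print(score)                              # ~6.106462e-04, the value shown above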
    
    Abstract
     Answering open-domain factual questions requires Natural Language Processing for refining document selection and answer identification. With our system QALC, we have participated in the Question Answering track of the TREC8, TREC9 and TREC10 evaluations. QALC performs an analysis of documents relying on multiword term searches and their linguistic variation, both to minimize the number of documents selected and to provide additional clues when comparing question and sentence representations. This comparison process also makes use of the results of a syntactic parsing of the questions and of Named Entity recognition functionalities. Answer extraction relies on the application of syntactic patterns chosen according to the kind of information that is sought, and categorized depending on the syntactic form of the question. These patterns allow QALC to handle linguistic variations at the answer level gracefully.
    Type
    a
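     The QALC abstract above describes answer extraction via syntactic patterns selected according to the expected kind of answer. A hypothetical, much-simplified illustration of that idea (not QALC's actual patterns, which are applied to syntactic analyses rather than raw text):

         import re
         from typing import Optional

         # Hypothetical toy patterns keyed by expected answer type; QALC's real
         # patterns are derived from the syntactic form of the question.
         ANSWER_PATTERNS = {
             "WHEN": re.compile(r"\b(?:in|on)\s+(\d{4})\b"),              # a year after "in"/"on"
             "WHO": re.compile(r"\b([A-Z][a-z]+(?:\s+[A-Z][a-z]+)+)\b"),  # a capitalized name sequence
         }

         def extract_answer(answer_type: str, sentence: str) -> Optional[str]:
             """Return the first span in the sentence matching the pattern for this answer type."""
             pattern = ANSWER_PATTERNS.get(answer_type)
             match = pattern.search(sentence) if pattern else None
             return match.group(1) if match else None

         print(extract_answer("WHO", "The telephone was patented by Alexander Graham Bell in 1876."))
         # -> "Alexander Graham Bell"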
  2. Wittbrock, M.J.; Hauptmann, A.G.: Speech recognition for a digital video library (1998) 0.00
    5.875945E-4 = product of:
      0.002350378 = sum of:
        0.002350378 = product of:
          0.007051134 = sum of:
            0.007051134 = weight(_text_:a in 873) [ClassicSimilarity], result of:
              0.007051134 = score(doc=873,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12739488 = fieldWeight in 873, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=873)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     The standard method for making the full content of audio and video material searchable is to annotate it with human-generated meta-data that describes the content in a way that search can understand, as is done in the creation of multimedia CD-ROMs. However, for the huge amounts of data that could usefully be included in digital video and audio libraries, the cost of producing the meta-data is prohibitive. In the Informedia Digital Video Library, the production of the meta-data supporting the library interface is automated using techniques derived from artificial intelligence (AI) research. By applying speech recognition together with natural language processing, information retrieval, and image analysis, an interface has been produced that helps users locate the information they want, and navigate or browse the digital video library more effectively. Specific interface components include automatic titles, filmstrips, video skims, word location marking, and representative frames for shots. Both the user interface and the information retrieval engine within Informedia are designed for use with automatically derived meta-data, much of which depends on speech recognition for its production. Some experimental information retrieval results will be given, supporting a basic premise of the Informedia project: that speech-recognition-generated transcripts can make multimedia material searchable. The Informedia project emphasizes the integration of speech recognition, image processing, natural language processing, and information retrieval to compensate for deficiencies in these individual technologies.
    Type
    a
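     The Informedia abstract above rests on the premise that speech-recognition transcripts make audio and video material searchable. A hypothetical minimal sketch of that premise (time-stamped transcript segments indexed by word; not Informedia's actual retrieval engine):

         from collections import defaultdict

         # Hypothetical toy index over time-stamped ASR transcript segments, so a
         # text query can locate the matching point in a video. Illustration only.
         segments = [
             (0.0, "welcome to the digital video library"),
             (12.5, "speech recognition produces a transcript for every news story"),
             (27.0, "users can search the transcript and jump to the matching shot"),
         ]

         index = defaultdict(list)           # word -> start times of segments containing it
         for start, text in segments:
             for word in set(text.split()):
                 index[word].append(start)

         def search(query):
             """Return start times of segments containing every query word."""
             hits = [set(index.get(word, ())) for word in query.lower().split()]
             return sorted(set.intersection(*hits)) if hits else []

         print(search("transcript search"))  # -> [27.0]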
  3. Lange, H.R.: Speech synthesis and speech recognition : tomorrow's human-computer interface? (1993) 0.00
    4.700756E-4 = product of:
      0.0018803024 = sum of:
        0.0018803024 = product of:
          0.005640907 = sum of:
            0.005640907 = weight(_text_:a in 7224) [ClassicSimilarity], result of:
              0.005640907 = score(doc=7224,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10191591 = fieldWeight in 7224, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7224)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a
  4. Hannabuss, S.: Dialogue and the search for information (1989) 0.00
    4.700756E-4 = product of:
      0.0018803024 = sum of:
        0.0018803024 = product of:
          0.005640907 = sum of:
            0.005640907 = weight(_text_:a in 2590) [ClassicSimilarity], result of:
              0.005640907 = score(doc=2590,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10191591 = fieldWeight in 2590, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2590)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Type
    a