Search (10 results, page 1 of 1)

  • theme_ss:"Sprachretrieval"
  1. Keller, F.: How do humans deal with ungrammatical input? : Experimental evidence and computational modelling (1996) 0.04
    0.042149827 = product of:
      0.14752439 = sum of:
        0.029979186 = weight(_text_:with in 7293) [ClassicSimilarity], result of:
          0.029979186 = score(doc=7293,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.3194935 = fieldWeight in 7293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.09375 = fieldNorm(doc=7293)
        0.117545195 = product of:
          0.23509039 = sum of:
            0.23509039 = weight(_text_:humans in 7293) [ClassicSimilarity], result of:
              0.23509039 = score(doc=7293,freq=2.0), product of:
                0.26276368 = queryWeight, product of:
                  6.7481275 = idf(docFreq=140, maxDocs=44218)
                  0.038938753 = queryNorm
                0.8946837 = fieldWeight in 7293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7481275 = idf(docFreq=140, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7293)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
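The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sketch, the score of the top-ranked record (doc 7293) can be reproduced directly from the numbers shown, taking docFreq, maxDocs, queryNorm, fieldNorm, and the coord factors verbatim from the dump:

```python
import math

# Reproduce the ClassicSimilarity (TF-IDF) arithmetic from the explain
# tree above (doc 7293, terms "with" and "humans"). All constants are
# copied from the dump itself.

MAX_DOCS = 44218
QUERY_NORM = 0.038938753  # given in the dump as queryNorm

def idf(doc_freq, max_docs=MAX_DOCS):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq, doc_freq, field_norm):
    # weight = queryWeight * fieldWeight
    #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    query_weight = idf(doc_freq) * QUERY_NORM
    field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm
    return query_weight * field_weight

w_with   = term_weight(freq=2.0, doc_freq=10797, field_norm=0.09375)
w_humans = term_weight(freq=2.0, doc_freq=140,   field_norm=0.09375)

# "humans" sits in a nested clause with coord(1/2); the whole query
# then applies coord(2/7) because 2 of 7 query clauses matched.
score = (2 / 7) * (w_with + w_humans * 0.5)
print(round(score, 6))  # matches the displayed 0.042149827
```

The same arithmetic, with different freq, docFreq, and fieldNorm values, accounts for every explain tree in the result list below.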
    
  2. Kruschwitz, U.; Al-Bakour, H.: Users want more sophisticated search assistants : results of a task-based evaluation (2005) 0.00
    0.003568951 = product of:
      0.024982655 = sum of:
        0.024982655 = weight(_text_:with in 4575) [ClassicSimilarity], result of:
          0.024982655 = score(doc=4575,freq=8.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2662446 = fieldWeight in 4575, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4575)
      0.14285715 = coord(1/7)
    
    Abstract
    The Web provides a massive knowledge source, as do intranets and other electronic document collections. However, much of that knowledge is encoded implicitly and cannot be applied directly without processing into some more appropriate structures. Searching, browsing, question answering, for example, could all benefit from domain-specific knowledge contained in the documents, and in applications such as simple search we do not actually need very "deep" knowledge structures such as ontologies, but we can get a long way with a model of the domain that consists of term hierarchies. We combine domain knowledge automatically acquired by exploiting the documents' markup structure with knowledge extracted on the fly to assist a user with ad hoc search requests. Such a search system can suggest query modification options derived from the actual data and thus guide a user through the space of documents. This article gives a detailed account of a task-based evaluation that compares a search system that uses the outlined domain knowledge with a standard search system. We found that users do use the query modification suggestions proposed by the system. The main conclusion we can draw from this evaluation, however, is that users prefer a system that can suggest query modifications over a standard search engine, which simply presents a ranked list of documents. Most interestingly, we observe this user preference despite the fact that the baseline system even performs slightly better under certain criteria.
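The abstract above describes deriving query-modification suggestions from automatically acquired term hierarchies. A minimal, hypothetical sketch of that idea follows; the hierarchy data and helper names are invented for illustration, not the authors' actual implementation:

```python
# Hypothetical sketch: offer narrower terms from a term hierarchy as
# query refinements, in the spirit of the system described above.
# TERM_HIERARCHY is invented example data.

TERM_HIERARCHY = {
    "search": ["web search", "intranet search", "site search"],
    "web search": ["search engine", "ranked list"],
}

def suggest_modifications(query, hierarchy=TERM_HIERARCHY):
    """Return narrower terms to present as query-modification options."""
    suggestions = []
    for term in query.lower().split():
        suggestions.extend(hierarchy.get(term, []))
    return suggestions

print(suggest_modifications("search"))
# ['web search', 'intranet search', 'site search']
```

A real system would build the hierarchy from document markup rather than hard-code it, and would rank the suggestions against the current result set.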
  3. Sparck Jones, K.; Jones, G.J.F.; Foote, J.T.; Young, S.J.: Experiments in spoken document retrieval (1996) 0.00
    0.0035330812 = product of:
      0.024731567 = sum of:
        0.024731567 = weight(_text_:with in 1951) [ClassicSimilarity], result of:
          0.024731567 = score(doc=1951,freq=4.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2635687 = fieldWeight in 1951, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1951)
      0.14285715 = coord(1/7)
    
    Abstract
    Describes experiments in the retrieval of spoken documents in multimedia systems. Speech documents pose a particular problem for retrieval since their words as well as contents are unknown. Addresses this problem, for a video mail application, by combining state of the art speech recognition with established document retrieval technologies so as to provide an effective and efficient retrieval tool. Tests with a small spoken message collection show that retrieval precision for the spoken file can reach 90% of that obtained when the same file is used, as a benchmark, in text transcription form
  4. Wittbrock, M.J.; Hauptmann, A.G.: Speech recognition for a digital video library (1998) 0.00
    0.0030908023 = product of:
      0.021635616 = sum of:
        0.021635616 = weight(_text_:with in 873) [ClassicSimilarity], result of:
          0.021635616 = score(doc=873,freq=6.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.2305746 = fieldWeight in 873, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0390625 = fieldNorm(doc=873)
      0.14285715 = coord(1/7)
    
    Abstract
    The standard method for making the full content of audio and video material searchable is to annotate it with human-generated meta-data that describes the content in a way that search can understand, as is done in the creation of multimedia CD-ROMs. However, for the huge amounts of data that could usefully be included in digital video and audio libraries, the cost of producing the meta-data is prohibitive. In the Informedia Digital Video Library, the production of the meta-data supporting the library interface is automated using techniques derived from artificial intelligence (AI) research. By applying speech recognition together with natural language processing, information retrieval, and image analysis, an interface has been produced that helps users locate the information they want, and navigate or browse the digital video library more effectively. Specific interface components include automatic titles, filmstrips, video skims, word location marking, and representative frames for shots. Both the user interface and the information retrieval engine within Informedia are designed for use with automatically derived meta-data, much of which depends on speech recognition for its production. Some experimental information retrieval results will be given, supporting a basic premise of the Informedia project: that speech recognition generated transcripts can make multimedia material searchable. The Informedia project emphasizes the integration of speech recognition, image processing, natural language processing, and information retrieval to compensate for deficiencies in these individual technologies
  5. Hannabuss, S.: Dialogue and the search for information (1989) 0.00
    0.0028551605 = product of:
      0.019986123 = sum of:
        0.019986123 = weight(_text_:with in 2590) [ClassicSimilarity], result of:
          0.019986123 = score(doc=2590,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.21299566 = fieldWeight in 2590, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0625 = fieldNorm(doc=2590)
      0.14285715 = coord(1/7)
    
    Abstract
    Knowledge of conversation theory and speech acts helps us understand how people search for information. Dialogue embodies meanings and intentionalities, and represents epistemic inquiry. There are implications for the information-processing model of cognitive psychology. Question formulation (erotetics) and turn-taking play important roles in eliciting information, while discourse analysis furnishes us with information about people's categorising, recall, and semantic skills
  6. Burke, R.D.: Question answering from frequently asked question files : experiences with the FAQ Finder System (1997) 0.00
    0.0028551605 = product of:
      0.019986123 = sum of:
        0.019986123 = weight(_text_:with in 1191) [ClassicSimilarity], result of:
          0.019986123 = score(doc=1191,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.21299566 = fieldWeight in 1191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0625 = fieldNorm(doc=1191)
      0.14285715 = coord(1/7)
    
  7. Pomerantz, J.: ¬A linguistic analysis of question taxonomies (2005) 0.00
    0.0024982654 = product of:
      0.017487857 = sum of:
        0.017487857 = weight(_text_:with in 3465) [ClassicSimilarity], result of:
          0.017487857 = score(doc=3465,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.1863712 = fieldWeight in 3465, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3465)
      0.14285715 = coord(1/7)
    
    Abstract
    Recent work in automatic question answering has called for question taxonomies as a critical component of the process of machine understanding of questions. There is a long tradition of classifying questions in library reference services, and digital reference services have a strong need for automation to support scalability. Digital reference and question answering systems have the potential to arrive at a highly fruitful symbiosis. To move towards this goal, an extensive review was conducted of bodies of literature from several fields that deal with questions, to identify question taxonomies that exist in these bodies of literature. In the course of this review, five question taxonomies were identified, at four levels of linguistic analysis.
  8. Srihari, R.K.: Using speech input for image interpretation, annotation, and retrieval (1997) 0.00
    0.0022609986 = product of:
      0.015826989 = sum of:
        0.015826989 = product of:
          0.031653978 = sum of:
            0.031653978 = weight(_text_:22 in 764) [ClassicSimilarity], result of:
              0.031653978 = score(doc=764,freq=2.0), product of:
                0.13635688 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038938753 = queryNorm
                0.23214069 = fieldWeight in 764, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=764)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.1997 19:16:05
  9. Ferret, O.; Grau, B.; Hurault-Plantet, M.; Illouz, G.; Jacquemin, C.; Monceaux, L.; Robba, I.; Vilnat, A.: How NLP can improve question answering (2002) 0.00
    0.0021413704 = product of:
      0.014989593 = sum of:
        0.014989593 = weight(_text_:with in 1850) [ClassicSimilarity], result of:
          0.014989593 = score(doc=1850,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.15974675 = fieldWeight in 1850, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.046875 = fieldNorm(doc=1850)
      0.14285715 = coord(1/7)
    
    Abstract
    Answering open-domain factual questions requires Natural Language Processing for refining document selection and answer identification. With our system QALC, we have participated in the Question Answering track of the TREC8, TREC9 and TREC10 evaluations. QALC performs an analysis of documents relying on multiword term searches and their linguistic variation both to minimize the number of documents selected and to provide additional clues when comparing question and sentence representations. This comparison process also makes use of the results of a syntactic parsing of the questions and Named Entity recognition functionalities. Answer extraction relies on the application of syntactic patterns chosen according to the kind of information that is sought, and categorized depending on the syntactic form of the question. These patterns allow QALC to handle linguistic variations nicely at the answer level.
  10. Galitsky, B.: Can many agents answer questions better than one? (2005) 0.00
    0.0021413704 = product of:
      0.014989593 = sum of:
        0.014989593 = weight(_text_:with in 3094) [ClassicSimilarity], result of:
          0.014989593 = score(doc=3094,freq=2.0), product of:
            0.09383348 = queryWeight, product of:
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.038938753 = queryNorm
            0.15974675 = fieldWeight in 3094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.409771 = idf(docFreq=10797, maxDocs=44218)
              0.046875 = fieldNorm(doc=3094)
      0.14285715 = coord(1/7)
    
    Abstract
    The paper addresses the issue of how online natural language question answering, based on deep semantic analysis, may compete with currently popular keyword-search, open-domain information retrieval systems covering a horizontal domain. We suggest the multiagent question answering approach, where each domain is represented by an agent which tries to answer questions taking into account its specific knowledge. The meta-agent controls the cooperation between question answering agents and chooses the most relevant answer(s). We argue that multiagent question answering is optimal in terms of access to business and financial knowledge, flexibility in query phrasing, and efficiency and usability of advice. The knowledge and advice encoded in the system are initially prepared by domain experts. We analyze the commercial application of multiagent question answering and the robustness of the meta-agent. The paper suggests that a multiagent architecture is optimal when a real world question answering domain combines a number of vertical ones to form a horizontal domain.
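The abstract above describes domain agents each proposing an answer and a meta-agent selecting the most relevant one. A minimal, hypothetical sketch of that pattern follows; the agents, domains, and keyword-overlap scoring are invented for illustration, not the paper's actual method:

```python
# Hypothetical sketch of the multiagent pattern described above: each
# domain agent answers with a confidence score, and a meta-agent picks
# the most confident answer. All data here is invented.

class DomainAgent:
    def __init__(self, domain, knowledge):
        self.domain = domain
        self.knowledge = knowledge  # question keywords -> canned answer

    def answer(self, question):
        """Return (confidence, answer); confidence is keyword overlap."""
        words = set(question.lower().split())
        best = max(self.knowledge.items(),
                   key=lambda kv: len(words & set(kv[0].split())))
        overlap = len(words & set(best[0].split()))
        return overlap / max(len(words), 1), best[1]

class MetaAgent:
    def __init__(self, agents):
        self.agents = agents

    def answer(self, question):
        # Collect each agent's (confidence, answer) and keep the best.
        conf, ans = max(a.answer(question) for a in self.agents)
        return ans

tax = DomainAgent("tax", {"income tax rate": "Ask the tax agent."})
bank = DomainAgent("banking", {"open bank account": "Ask the banking agent."})
meta = MetaAgent([tax, bank])
print(meta.answer("what is the income tax rate"))  # "Ask the tax agent."
```

In the system the paper describes, the agents hold expert-prepared domain knowledge and the meta-agent applies deep semantic analysis rather than keyword overlap; the sketch only shows the control flow.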