Search (3 results, page 1 of 1)

  • × theme_ss:"Sprachretrieval"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
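The three filters above are Solr facet constraints; year_i:[2000 TO 2010} uses Solr's half-open range syntax (lower bound [ inclusive, upper bound } exclusive, so it matches 2000 through 2009). A minimal sketch of replaying the same filtered search against a generic Solr /select handler; the host, core name, and field list are hypothetical:

    import requests

    # Hypothetical Solr core; q/fq/fl/rows/wt are standard /select parameters.
    SOLR_SELECT = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "*:*",
        "fq": [                          # one fq per active facet filter
            'theme_ss:"Sprachretrieval"',
            'type_ss:"a"',
            "year_i:[2000 TO 2010}",     # half-open range: 2000-2009
        ],
        "fl": "id,score",
        "rows": 10,
        "wt": "json",
    }

    resp = requests.get(SOLR_SELECT, params=params)
    print(resp.json()["response"]["numFound"])   # expect 3 for the list above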
  1. Strötgen, R.; Mandl, T.; Schneider, R.: Entwicklung und Evaluierung eines Question Answering Systems im Rahmen des Cross Language Evaluation Forum (CLEF) (2006) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 5981) [ClassicSimilarity], result of:
              0.056588627 = score(doc=5981,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 5981, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5981)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Question answering systems attempt to deliver a correct answer to a specific question. To do so, they search a document collection and extract a small fragment of a document. This paper describes the development of a modular system for multilingual question answering. The development strategy aimed at making a modular system usable as quickly as possible by drawing on many freely available resources. The system integrates modules for named entity recognition, indexing and retrieval, electronic dictionaries, online translation tools, and text corpora for training and testing, and it implements its own approaches to question and answer taxonomies, passage retrieval, and the ranking of alternative answers.
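The score explanation above is standard Lucene ClassicSimilarity (TF-IDF) output: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each coord(1/2) halves the score because only one of two query clauses matched. A minimal sketch reproducing the first tree's arithmetic; every constant is copied from the explain output above:

    import math

    # Values copied from the explain tree for doc 5981; only the formula
    # structure (standard Lucene ClassicSimilarity) is added here.
    freq       = 6.0          # termFreq of "systems"
    doc_freq   = 5561
    max_docs   = 44218
    query_norm = 0.052184064
    field_norm = 0.046875

    idf = 1 + math.log(max_docs / (doc_freq + 1))   # 3.0731742
    tf  = math.sqrt(freq)                           # 2.4494898

    query_weight = idf * query_norm                 # 0.16037072  = queryWeight
    field_weight = tf * idf * field_norm            # 0.35286134  = fieldWeight
    weight = query_weight * field_weight            # 0.056588627 = weight(_text_:systems)

    # The two nested coord(1/2) factors halve the score twice.
    score = weight * 0.5 * 0.5                      # 0.014147157
    print(f"{score:.9f}")

The second and third results follow the same scheme, differing only in freq (4.0 and 2.0) and fieldNorm.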
  2. Lin, J.; Katz, B.: Building a reusable test collection for question answering (2006) 0.01
    0.011551105 = product of:
      0.02310221 = sum of:
        0.02310221 = product of:
          0.04620442 = sum of:
            0.04620442 = weight(_text_:systems in 5045) [ClassicSimilarity], result of:
              0.04620442 = score(doc=5045,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.28811008 = fieldWeight in 5045, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5045)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In contrast to traditional information retrieval systems, which return ranked lists of documents that users must manually browse through, a question answering system attempts to directly answer natural language questions posed by the user. Although such systems possess language-processing capabilities, they still rely on traditional document retrieval techniques to generate an initial candidate set of documents. In this article, the authors argue that document retrieval for question answering represents a task different from retrieving documents in response to more general retrospective information needs. Thus, to guide future system development, specialized question answering test collections must be constructed. They show that the current evaluation resources have major shortcomings; to remedy the situation, they have manually created a small, reusable question answering test collection for research purposes. In this article they describe their methodology for building this test collection and discuss issues they encountered regarding the notion of "answer correctness."
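For context, reusable QA test collections of the kind described here typically pair each question with a regular-expression answer pattern plus the set of documents judged to support the answer, so a system run can be scored strictly (pattern match and supporting document) or leniently (pattern match alone). A minimal sketch under those assumptions; the data layout and field names are illustrative, not the authors' actual format:

    import re
    from dataclasses import dataclass

    @dataclass
    class Judgment:
        question: str
        answer_pattern: str      # regex the answer string must match
        supporting_docs: set     # doc IDs judged to support the answer

    def judge(j: Judgment, answer: str, source_doc: str, strict: bool = True) -> bool:
        matches = re.search(j.answer_pattern, answer) is not None
        if strict:
            return matches and source_doc in j.supporting_docs
        return matches

    # Usage: a system returns ("1969", "doc-314") for the question below.
    j = Judgment("When did Apollo 11 land on the moon?", r"\b1969\b",
                 {"doc-042", "doc-314"})
    print(judge(j, "1969", "doc-314"))          # True  (strict: judged supporting doc)
    print(judge(j, "1969", "doc-999"))          # False (strict: unjudged source doc)
    print(judge(j, "1969", "doc-999", False))   # True  (lenient: pattern match suffices)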
  3. Pomerantz, J.: A linguistic analysis of question taxonomies (2005) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 3465) [ClassicSimilarity], result of:
              0.038116705 = score(doc=3465,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 3465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3465)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent work in automatic question answering has called for question taxonomies as a critical component of the process of machine understanding of questions. There is a long tradition of classifying questions in library reference services, and digital reference services have a strong need for automation to support scalability. Digital reference and question answering systems have the potential to arrive at a highly fruitful symbiosis. To move towards this goal, an extensive review was conducted of the literature from several fields that deal with questions, to identify the question taxonomies that exist in them. In the course of this review, five question taxonomies were identified, at four levels of linguistic analysis.