Search (5 results, page 1 of 1)

  • theme_ss:"Sprachretrieval"
  1. Srihari, R.K.: Using speech input for image interpretation, annotation, and retrieval (1997) 0.04
    Date
    22.9.1997 19:16:05
    Source
    Digital image access and retrieval: Proceedings of the 1996 Clinic on Library Applications of Data Processing, 24-26 Mar 1996. Eds.: P.B. Heidorn and B. Sandore
  2. Lin, J.; Katz, B.: Building a reusable test collection for question answering (2006) 0.04
    Abstract
    In contrast to traditional information retrieval systems, which return ranked lists of documents that users must manually browse through, a question answering system attempts to directly answer natural language questions posed by the user. Although such systems possess language-processing capabilities, they still rely on traditional document retrieval techniques to generate an initial candidate set of documents. In this article, the authors argue that document retrieval for question answering represents a task different from retrieving documents in response to more general retrospective information needs. Thus, to guide future system development, specialized question answering test collections must be constructed. They show that the current evaluation resources have major shortcomings; to remedy the situation, they have manually created a small, reusable question answering test collection for research purposes. In this article they describe their methodology for building this test collection and discuss issues they encountered regarding the notion of "answer correctness."
  3. Ferret, O.; Grau, B.; Hurault-Plantet, M.; Illouz, G.; Jacquemin, C.; Monceaux, L.; Robba, I.; Vilnat, A.: How NLP can improve question answering (2002) 0.01
  4. Galitsky, B.: Can many agents answer questions better than one? (2005) 0.01
  5. Rösener, C.: Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.01
    Content
    5: Interaktion
       5.1 Frage-Antwort- bzw. Dialogsysteme: Forschungen und Projekte
       5.2 Darstellung und Visualisierung von Wissen
       5.3 Das Dialogsystem im Rahmen des LeWi-Projektes
       5.4 Ergebnisdarstellung und Antwortpräsentation im LeWi-Kontext
    6: Testumgebungen und -ergebnisse
    7: Ergebnisse und Ausblick
       7.1 Ausgangssituation
       7.2 Schlussfolgerungen
       7.3 Ausblick
    Anhang A: Auszüge aus der Grob- bzw. Feinklassifikation des BMM
    Anhang B: MPRO - Formale Beschreibung der wichtigsten Merkmale ...
    Anhang C: Fragentypologie mit Beispielsätzen (Auszug)
    Anhang D: Semantische Merkmale im morphologischen Lexikon (Auszug)
    Anhang E: Regelbeispiele für die Fragentypzuweisung
    Anhang F: Aufstellung der möglichen Suchen im LeWi-Dialogmodul (Auszug)
    Anhang G: Vollständiger Dialogbaum zu Beginn des Projektes
    Anhang H: Statuszustände zur Ermittlung der Folgefragen (Auszug)
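The two-decimal figure after each hit is a Lucene relevance score (the catalog uses ClassicSimilarity, i.e. classic tf-idf). As a rough sketch of how a single per-term weight is built from term frequency, inverse document frequency, and field normalization, assuming the standard ClassicSimilarity formula and purely illustrative input values (not part of the bibliographic records):

```python
import math

def tf(freq):
    # ClassicSimilarity term frequency factor: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # score = queryWeight * fieldWeight, mirroring the structure of
    # a Lucene "explain" tree for one query term in one document
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# Illustrative values: a term occurring twice in a document, appearing in
# 3476 of 44218 documents, with typical queryNorm and fieldNorm factors.
s = term_score(freq=2.0, doc_freq=3476, max_docs=44218,
               query_norm=0.052226946, field_norm=0.046875)
# s ≈ 0.0434594
```

Summing such per-term scores (with coordination factors for partial query matches) yields the per-hit values shown in the listing.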