Search (9 results, page 1 of 1)

  • author_ss:"Strzalkowski, T."
  1. Strzalkowski, T.; Perez-Carballo, J.: Natural language information retrieval : TREC-4 report (1996) 0.08
    0.07668869 = product of:
      0.15337738 = sum of:
        0.02059882 = weight(_text_:information in 3211) [ClassicSimilarity], result of:
          0.02059882 = score(doc=3211,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23274569 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3211)
        0.13277857 = weight(_text_:standards in 3211) [ClassicSimilarity], result of:
          0.13277857 = score(doc=3211,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.59091425 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.09375 = fieldNorm(doc=3211)
      0.5 = coord(2/4)
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  2. Strzalkowski, T.; Guthrie, L.; Karlgren, J.; Leistensnider, J.; Lin, F.; Perez-Carballo, J.; Straszheim, T.; Wang, J.; Wilding, J.: Natural language information retrieval : TREC-5 report (1997) 0.06
    0.06390724 = product of:
      0.12781449 = sum of:
        0.017165681 = weight(_text_:information in 3100) [ClassicSimilarity], result of:
          0.017165681 = score(doc=3100,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=3100)
        0.1106488 = weight(_text_:standards in 3100) [ClassicSimilarity], result of:
          0.1106488 = score(doc=3100,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 3100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.078125 = fieldNorm(doc=3100)
      0.5 = coord(2/4)
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  3. Strzalkowski, T.; Sparck Jones, K.: NLP track at TREC-5 (1997) 0.03
    0.033194643 = product of:
      0.13277857 = sum of:
        0.13277857 = weight(_text_:standards in 3098) [ClassicSimilarity], result of:
          0.13277857 = score(doc=3098,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.59091425 = fieldWeight in 3098, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.09375 = fieldNorm(doc=3098)
      0.25 = coord(1/4)
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  4. Perez-Carballo, J.; Strzalkowski, T.: Natural language information retrieval : progress report (2000) 0.01
    0.00849658 = product of:
      0.03398632 = sum of:
        0.03398632 = weight(_text_:information in 6421) [ClassicSimilarity], result of:
          0.03398632 = score(doc=6421,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3840108 = fieldWeight in 6421, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6421)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 36(2000) no.1, pp.155-205
  5. Strzalkowski, T.: Robust text processing in automated information retrieval (1994) 0.01
    0.006068985 = product of:
      0.02427594 = sum of:
        0.02427594 = weight(_text_:information in 1953) [ClassicSimilarity], result of:
          0.02427594 = score(doc=1953,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 1953, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1953)
      0.25 = coord(1/4)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp.317-322.
  6. Strzalkowski, T.: Natural language information retrieval (1995) 0.01
    0.0052030715 = product of:
      0.020812286 = sum of:
        0.020812286 = weight(_text_:information in 1914) [ClassicSimilarity], result of:
          0.020812286 = score(doc=1914,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23515764 = fieldWeight in 1914, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1914)
      0.25 = coord(1/4)
    
    Abstract
    Describes an information retrieval system in which advanced natural language processing techniques are used to enhance the effectiveness of term-based document retrieval. The backbone of the system is a traditional statistical engine that builds inverted index files from preprocessed documents, and then searches and ranks the documents in response to user queries. Natural language processing is used to: preprocess the documents in order to extract content-carrying terms; discover inter-term dependencies and build a conceptual hierarchy specific to the database domain; and process the user's natural language requests into effective search queries. During the course of the Text Retrieval Conferences, TREC-1 and TREC-2, this system has evolved from a scaled-up prototype, originally tested on such collections as CACM-3204 and Cranfield, to its present form, which can be effectively used to process hundreds of millions of words of unrestricted text.
    Source
    Information processing and management. 31(1995) no.3, pp.397-417
  7. Ng, K.B.; Kantor, P.B.; Strzalkowski, T.; Wacholder, N.; Tang, R.; Bai, B.; Rittman, R.; Song, P.; Sun, Y.: Automated judgment of document qualities (2006) 0.00
    0.0036413912 = product of:
      0.014565565 = sum of:
        0.014565565 = weight(_text_:information in 182) [ClassicSimilarity], result of:
          0.014565565 = score(doc=182,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=182)
      0.25 = coord(1/4)
    
    Abstract
    The authors report on a series of experiments to automate the assessment of document qualities such as depth and objectivity. The primary purpose is to develop a quality-sensitive functionality, orthogonal to relevance, to select documents for an interactive question-answering system. The study consisted of two stages. In the classifier construction stage, nine document qualities deemed important by information professionals were identified and classifiers were developed to predict their values. In the confirmative evaluation stage, the performance of the developed methods was checked using a different document collection. The quality prediction methods worked well in the second stage. The results strongly suggest that the best way to predict document qualities automatically is to construct classifiers on a person-by-person basis.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.9, pp.1155-1164
  8. Kelly, D.; Wacholder, N.; Rittman, R.; Sun, Y.; Kantor, P.; Small, S.; Strzalkowski, T.: Using interview data to identify evaluation criteria for interactive, analytical question-answering systems (2007) 0.00
    0.0036413912 = product of:
      0.014565565 = sum of:
        0.014565565 = weight(_text_:information in 332) [ClassicSimilarity], result of:
          0.014565565 = score(doc=332,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 332, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=332)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of this work is to identify potential evaluation criteria for interactive, analytical question-answering (QA) systems by analyzing evaluative comments made by users of such a system. Qualitative data collected from intelligence analysts during interviews and focus groups were analyzed to identify common themes related to performance, use, and usability. These data were collected as part of an intensive, three-day evaluation workshop of the High-Quality Interactive Question Answering (HITIQA) system. Inductive coding and memoing were used to identify and categorize these data. Results suggest potential evaluation criteria for interactive, analytical QA systems, which can be used to guide the development and design of future systems and evaluations. This work contributes to studies of QA systems, information seeking and use behaviors, and interactive searching.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.7, pp.1032-1043
  9. Wacholder, N.; Kelly, D.; Kantor, P.; Rittman, R.; Sun, Y.; Bai, B.; Small, S.; Yamrom, B.; Strzalkowski, T.: A model for quantitative evaluation of an end-to-end question-answering system (2007) 0.00
    0.0036413912 = product of:
      0.014565565 = sum of:
        0.014565565 = weight(_text_:information in 435) [ClassicSimilarity], result of:
          0.014565565 = score(doc=435,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 435, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=435)
      0.25 = coord(1/4)
    
    Abstract
    We describe a procedure for quantitative evaluation of interactive question-answering systems and illustrate it with application to the High-Quality Interactive Question Answering (HITIQA) system. Our objectives were (a) to design a method to realistically and reliably assess interactive question-answering systems by comparing the quality of reports produced using different systems, (b) to conduct a pilot test of this method, and (c) to perform a formative evaluation of the HITIQA system. Far more important than the specific information gathered from this pilot evaluation is the development of (a) a protocol for evaluating an emerging technology, (b) reusable assessment instruments, and (c) the knowledge gained in conducting the evaluation. We conclude that this method, which uses a surprisingly small number of subjects and does not rely on predetermined relevance judgments, measures the impact of system change on work produced by users. Therefore this method can be used to compare the product of interactive systems that use different underlying technologies.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.8, pp.1082-1099
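
A note on the relevance scores above: each indented tree is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. Every matched query term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm; the per-term contributions are summed and multiplied by coord(matched terms / total query terms). The following is a minimal Python sketch, not the catalog's actual code, that reproduces the score of result 1 under Lucene's classic definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); queryNorm is copied from the explain output rather than recomputed, since it depends on all terms of the original query.

import math

# ClassicSimilarity building blocks, as printed in the explain trees
def tf(freq):
    return math.sqrt(freq)                      # 1.4142135 for freq=2.0

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.050415643                        # copied from the explain output
MAX_DOCS = 44218

def term_score(freq, doc_freq, field_norm):
    i = idf(doc_freq, MAX_DOCS)
    query_weight = i * QUERY_NORM               # idf * queryNorm
    field_weight = tf(freq) * i * field_norm    # tf * idf * fieldNorm
    return query_weight * field_weight

# Result 1 (doc 3211): two of four query terms matched, so coord(2/4) = 0.5
score = 0.5 * (
    term_score(freq=2.0, doc_freq=20772, field_norm=0.09375)   # _text_:information
    + term_score(freq=2.0, doc_freq=1393, field_norm=0.09375)  # _text_:standards
)
print(f"{score:.8f}")                           # ~0.07668869, displayed rounded as 0.08

The remaining eight scores follow the same recipe, using the freq, docFreq, and fieldNorm values shown in each tree; fieldNorm encodes index-time field-length normalization, so shorter fields yield higher per-term weights.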