Search (6 results, page 1 of 1)

  • Filter: author_ss:"Harman, D."
  1. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.010714828 = product of:
      0.042859312 = sum of:
        0.042859312 = product of:
          0.085718624 = sum of:
            0.085718624 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.085718624 = score(doc=6438,freq=2.0), product of:
                0.15825124 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191016 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    11.08.2001 16:22:19
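  The breakdown above is Lucene's "explain" output for ClassicSimilarity, the classic TF-IDF scorer: queryWeight = idf * queryNorm, fieldWeight = sqrt(tf) * idf * fieldNorm, their product is the term weight, and the coord factors scale it down because only 1 of 2 (and 1 of 4) query clauses matched. A minimal Python sketch, not part of the search output, using the standard ClassicSimilarity formulas with the constants taken from the tree above, reproduces the score of entry 1:

    import math

    def explain_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
        # ClassicSimilarity building blocks, as shown in the explain tree
        tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.5018296
        query_weight = idf * query_norm                  # 0.15825124
        field_weight = tf * idf * field_norm             # 0.5416616
        score = query_weight * field_weight              # 0.085718624
        for c in coords:                                 # coord(1/2), coord(1/4)
            score *= c
        return score

    # Term "_text_:22" in doc 6438 (entry 1 above):
    print(explain_score(freq=2.0, doc_freq=3622, max_docs=44218,
                        query_norm=0.045191016, field_norm=0.109375,
                        coords=[0.5, 0.25]))             # ~0.010714828

  The same function with doc_freq=2156 (idf = 4.0204134) reproduces the "methods" weights in entries 2-6 below.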
  2. Harman, D.: Overview of the Second Text Retrieval Conference : TREC-2 (1995) 0.01
    0.008070464 = product of:
      0.032281857 = sum of:
        0.032281857 = product of:
          0.064563714 = sum of:
            0.064563714 = weight(_text_:methods in 1915) [ClassicSimilarity], result of:
              0.064563714 = score(doc=1915,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.35535768 = fieldWeight in 1915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1915)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The conference was attended by about 150 people from the 31 participating groups. Its goal was to bring research groups together to discuss their work on a new large test collection. A wide variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. Because the results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques and discuss how differences between the systems affected performance.
  3. Over, P.; Dang, H.; Harman, D.: DUC in context (2007) 0.01
    0.008070464 = product of:
      0.032281857 = sum of:
        0.032281857 = product of:
          0.064563714 = sum of:
            0.064563714 = weight(_text_:methods in 934) [ClassicSimilarity], result of:
              0.064563714 = score(doc=934,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.35535768 = fieldWeight in 934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=934)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Recent years have seen increased interest in text summarization with emphasis on evaluation of prototype systems. Many factors can affect the design of such evaluations, requiring choices among competing alternatives. This paper examines several major themes running through three evaluations: SUMMAC, NTCIR, and DUC, with a concentration on DUC. The themes are extrinsic and intrinsic evaluation, evaluation procedures and methods, generic versus focused summaries, single- and multi-document summaries, length and compression issues, extracts versus abstracts, and issues with genre.
  4. Harman, D.: User-friendly systems instead of user-friendly front-ends (1992) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 3323) [ClassicSimilarity], result of:
              0.056493253 = score(doc=3323,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 3323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3323)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Most commercial on-line information retrieval systems are not designed to serve end users, and front-ends have therefore often been built onto these systems specifically to serve the end-user market. These front-ends have not been well accepted, mostly because the underlying systems are still difficult for end users to search successfully. New techniques based on statistical methods, which accept natural language input and return lists of records ranked by likely relevance, have long been available from research laboratories. Presents 4 prototype implementations of these statistical retrieval systems that demonstrate their potential as powerful and easily used retrieval systems able to serve all users: the PRISE system, the CITE system, the Muscat system, and the News Retrieval Tool.
  5. Harman, D.: Overview of the first Text Retrieval Conference (1993) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 548) [ClassicSimilarity], result of:
              0.056493253 = score(doc=548,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The first Text REtrieval Conference (TREC-1) was held in early November and was attended by about 100 people working in the 25 participating groups. The goal of the conference was to bring research groups together to discuss their work on a new large test collection. A wide variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. Because the results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques and discuss how differences among the systems affected performance.
  6. Harman, D.: The Text REtrieval Conferences (TRECs) : providing a test-bed for information retrieval systems (1998) 0.01
    0.0070616566 = product of:
      0.028246626 = sum of:
        0.028246626 = product of:
          0.056493253 = sum of:
            0.056493253 = weight(_text_:methods in 1314) [ClassicSimilarity], result of:
              0.056493253 = score(doc=1314,freq=2.0), product of:
                0.18168657 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.045191016 = queryNorm
                0.31093797 = fieldWeight in 1314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1314)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The Text REtrieval Conference (TREC) workshop series encourages research in information retrieval from large text applications by providing a large test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. Now in its seventh year, the conference has become the major experimental effort in the field. Participants in the TREC conferences have examined a wide variety of retrieval techniques, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. The TREC conference series is co-sponsored by the National Institute of Standards and Technology (NIST) and the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA).
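  Across entries 2-6 every factor of the "methods" weight is identical except fieldNorm, Lucene's length normalization: by default lengthNorm = 1/sqrt(number of terms in the field), quantized to a single byte before being stored. Inverting it therefore gives only a rough estimate of field length, but it shows why the documents with fieldNorm 0.0625 (entries 2-3) outscore those with 0.0546875 (entries 4-6): a higher norm means a shorter indexed field. A rough sketch, assuming the default lengthNorm with no field boost:

    # Approximate inversion of lengthNorm = 1 / sqrt(num_terms).
    # Only an estimate: Lucene stores norms lossily in one byte.
    for norm in (0.109375, 0.0625, 0.0546875):
        print(f"fieldNorm {norm} -> ~{round(1.0 / norm ** 2)} terms")
    # 0.109375 -> ~84, 0.0625 -> ~256, 0.0546875 -> ~334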