Search (9 results, page 1 of 1)

  • author_ss:"Harman, D."
  1. Harman, D.: User-friendly systems instead of user-friendly front-ends (1992) 0.03
    0.02521183 = product of:
      0.05042366 = sum of:
        0.05042366 = product of:
          0.10084732 = sum of:
            0.10084732 = weight(_text_:systems in 3323) [ClassicSimilarity], result of:
              0.10084732 = score(doc=3323,freq=14.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.6288387 = fieldWeight in 3323, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3323)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most commercial on-line information retrieval systems are not designed to serve end users, and vendors have therefore often built front-ends to these systems specifically for the end-user market. These front-ends have not been well accepted, mostly because the underlying systems remain difficult for end users to search successfully. New techniques, based on statistical methods, that accept natural language input and return lists of records in order of likely relevance have long been available from research laboratories. Presents four prototype implementations of these statistical retrieval systems that demonstrate their potential as powerful and easily used retrieval systems able to serve all users. The systems are: the PRISE system; the CITE system; the Muscat system; and the News Retrieval Tool.
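The indented explain traces attached to each result are Lucene's ClassicSimilarity (TF-IDF) score breakdowns. As a sanity check, the first trace (term "systems" in doc 3323) can be reproduced from the inputs it states; a minimal sketch using Lucene's classic formulas, with queryNorm and fieldNorm taken verbatim from the trace since they depend on the full query and index:

```python
import math

# Inputs copied from the first explain trace above.
doc_freq, max_docs = 5561, 44218
freq = 14.0
field_norm = 0.0546875        # encoded document-length norm from the index
query_norm = 0.052184064      # depends on the whole query; taken as given

idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 3.0731742
tf = math.sqrt(freq)                               # 3.7416575

query_weight = idf * query_norm                    # 0.16037072
field_weight = tf * idf * field_norm               # 0.6288387
score = query_weight * field_weight                # 0.10084732

# coord(1/2) = 0.5 is applied at two levels on the way up the trace:
final = score * 0.5 * 0.5                          # 0.02521183
print(round(final, 8))
```

Working through one trace this way makes the repeated structure of the other eight easy to read: each is the same tf * idf * norm product with different frequencies and field norms.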
  2. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.024745772 = product of:
      0.049491543 = sum of:
        0.049491543 = product of:
          0.09898309 = sum of:
            0.09898309 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.09898309 = score(doc=6438,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11.08.2001 16:22:19
  3. Harman, D.; McCoy, W.; Toense, R.: Prototyping a distributed information retrieval system that uses statistical ranking (1991) 0.02
    0.021307886 = product of:
      0.04261577 = sum of:
        0.04261577 = product of:
          0.08523154 = sum of:
            0.08523154 = weight(_text_:systems in 3844) [ClassicSimilarity], result of:
              0.08523154 = score(doc=3844,freq=10.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5314657 = fieldWeight in 3844, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3844)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Centralised systems continue to dominate the information retrieval market, with increased competition from CD-ROM based systems. As more large organisations begin to implement office automation systems, however, many will find that neither of these types of retrieval systems will satisfy their requirements, especially requirements involving easy integration with other systems and heavy usage by casual end users. A prototype distributed information retrieval system was designed and built using a distributed architecture and statistical ranking techniques to provide better service for the end user. The distributed architecture was shown to be a feasible alternative to centralised or CD-ROM information retrieval, and user testing of the ranking methodology showed both widespread user enthusiasm for this retrieval technique and very fast response times.
  4. Harman, D.: Overview of the Second Text Retrieval Conference : TREC-2 (1995) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 1915) [ClassicSimilarity], result of:
              0.043561947 = score(doc=1915,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 1915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1915)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The conference was attended by about 150 people involved in the 31 participating groups. Its goal was to bring research groups together to discuss their work on a new large test collection. A wide variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques and discuss how differences between the systems affected performance.
  5. Harman, D.: Relevance feedback and other query modification techniques (1992) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 3508) [ClassicSimilarity], result of:
              0.043561947 = score(doc=3508,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 3508, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3508)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a survey of relevance feedback techniques that have been used in past research, recommends various query modification approaches for use in different retrieval systems, and gives some guidelines for the efficient design of the relevance feedback component of a retrieval system.
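The classic representative of the query modification techniques this survey covers is Rocchio relevance feedback: move the query vector toward documents judged relevant and away from those judged non-relevant. A minimal sketch over plain term-weight dictionaries; the alpha/beta/gamma defaults are conventional textbook values, not figures from the paper:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return a modified query vector (term -> weight)."""
    new_q = {t: alpha * w for t, w in query.items()}
    # Add the centroid of relevant docs, subtract the centroid of non-relevant.
    for docs, coef in ((relevant, beta), (nonrelevant, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for t, w in doc.items():
                new_q[t] = new_q.get(t, 0.0) + coef * w / len(docs)
    # Negative weights are usually clipped away.
    return {t: w for t, w in new_q.items() if w > 0}

q = {"retrieval": 1.0}
rel = [{"retrieval": 0.5, "ranking": 0.8}]
print(rocchio(q, rel, []))   # the query picks up "ranking" from feedback
```

Note how query expansion falls out for free: terms that occur only in the relevant documents enter the modified query with positive weight.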
  6. Harman, D.: Ranking algorithms (1992) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 3511) [ClassicSimilarity], result of:
              0.043561947 = score(doc=3511,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 3511, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3511)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents both a summary of past research on the development of ranking algorithms and detailed instructions on implementing a ranking type of retrieval system. This type of retrieval system takes as input a natural language query without Boolean syntax and produces a list of records that 'answer' the query, ranked in order of likely relevance. Ranking retrieval systems are particularly appropriate for end users.
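The kind of ranking retrieval system the chapter describes can be sketched in a few lines: tokenize, weight query terms by TF-IDF, and return documents in descending score order, with no Boolean operators anywhere. A toy illustration (corpus and tokenization are deliberately simplistic, not the chapter's implementation):

```python
import math
from collections import Counter

docs = {
    1: "ranking algorithms for text retrieval",
    2: "boolean retrieval systems",
    3: "statistical ranking for end users",
}

def rank(query):
    """Rank doc ids for a natural-language query by a simple TF-IDF score."""
    n = len(docs)
    tokens = {d: text.split() for d, text in docs.items()}
    df = Counter(t for ts in tokens.values() for t in set(ts))
    scores = {}
    for d, ts in tokens.items():
        tf = Counter(ts)
        s = sum(tf[t] * math.log(n / df[t]) for t in query.split() if t in tf)
        if s > 0:
            scores[d] = s
    return sorted(scores, key=scores.get, reverse=True)

print(rank("ranking retrieval"))   # doc 1 matches both terms, so it leads
```

A production system adds document-length normalization and an inverted index so only postings for the query terms are touched, but the ranked-list contract is exactly this.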
  7. Over, P.; Dang, H.; Harman, D.: DUC in context (2007) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 934) [ClassicSimilarity], result of:
              0.043561947 = score(doc=934,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 934, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=934)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent years have seen increased interest in text summarization with emphasis on evaluation of prototype systems. Many factors can affect the design of such evaluations, requiring choices among competing alternatives. This paper examines several major themes running through three evaluations: SUMMAC, NTCIR, and DUC, with a concentration on DUC. The themes are extrinsic and intrinsic evaluation, evaluation procedures and methods, generic versus focused summaries, single- and multi-document summaries, length and compression issues, extracts versus abstracts, and issues with genre.
  8. Harman, D.: Overview of the first Text Retrieval Conference (1993) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 548) [ClassicSimilarity], result of:
              0.038116705 = score(doc=548,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The first Text Retrieval Conference (TREC-1) was held in early November and was attended by about 100 people working in the 25 participating groups. The goal of the conference was to bring research groups together to discuss their work on a new large test collection. A large variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques and discuss how differences among the systems affected performance.
  9. Harman, D.: The Text REtrieval Conferences (TRECs) : providing a test-bed for information retrieval systems (1998) 0.01
    0.009529176 = product of:
      0.019058352 = sum of:
        0.019058352 = product of:
          0.038116705 = sum of:
            0.038116705 = weight(_text_:systems in 1314) [ClassicSimilarity], result of:
              0.038116705 = score(doc=1314,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23767869 = fieldWeight in 1314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1314)
          0.5 = coord(1/2)
      0.5 = coord(1/2)