Search (5 results, page 1 of 1)

  • author_ss:"Harman, D."
  • theme_ss:"Retrievalstudien"
  1. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.05
    0.047836047 = product of:
      0.11959012 = sum of:
        0.009535614 = weight(_text_:a in 6438) [ClassicSimilarity], result of:
          0.009535614 = score(doc=6438,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.1100545 = sum of:
          0.022102704 = weight(_text_:information in 6438) [ClassicSimilarity], result of:
            0.022102704 = score(doc=6438,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.27153665 = fieldWeight in 6438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.109375 = fieldNorm(doc=6438)
          0.087951794 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
            0.087951794 = score(doc=6438,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.5416616 = fieldWeight in 6438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6438)
      0.4 = coord(2/5)
    
    Date
    11.8.2001 16:22:19
    Source
    Information processing and management. 36(2000) no.1, S.3-36
    Type
    a
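The score breakdowns in this listing follow Lucene's ClassicSimilarity (TF-IDF). As a minimal sketch, assuming Lucene's documented formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf · idf · fieldNorm, and a term's score = queryWeight · fieldWeight), the `weight(_text_:a in 6438)` branch of the first explanation can be reproduced as:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                             # tf(freq=2.0) = 1.4142135
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=37942, maxDocs=44218) = 1.153047
    query_weight = idf * query_norm                  # 0.053464882
    field_weight = tf * idf * field_norm             # fieldWeight = 0.17835285
    return query_weight * field_weight               # 0.009535614

# Values taken from the weight(_text_:a in 6438) branch above:
score = classic_term_score(freq=2.0, doc_freq=37942, max_docs=44218,
                           query_norm=0.046368346, field_norm=0.109375)
```

The other branches follow the same pattern; the top-level `coord(2/5)` factor then scales the summed term scores by the fraction of query clauses the document matched (here 0.11959012 × 0.4 = 0.047836).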
  2. Harman, D.: The Text REtrieval Conferences (TRECs) : providing a test-bed for information retrieval systems (1998) 0.01
    0.008684997 = product of:
      0.021712493 = sum of:
        0.010661141 = weight(_text_:a in 1314) [ClassicSimilarity], result of:
          0.010661141 = score(doc=1314,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.19940455 = fieldWeight in 1314, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1314)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 1314) [ClassicSimilarity], result of:
              0.022102704 = score(doc=1314,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 1314, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1314)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Text REtrieval Conference (TREC) workshop series encourages research in information retrieval from large text applications by providing a large test collection, uniform scoring procedures and a forum for organizations interested in comparing their results. Now in its seventh year, the conference has become the major experimental effort in the field. Participants in the TREC conferences have examined a wide variety of retrieval techniques, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback and advanced pattern matching. The TREC conference series is co-sponsored by the National Institute of Standards and Technology (NIST) and the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA)
    Source
    Bulletin of the American Society for Information Science. 24(1998), April/May, S.11-13
    Type
    a
  3. Harman, D.: Overview of the Second Text Retrieval Conference : TREC-2 (1995) 0.01
    0.0068851607 = product of:
      0.017212901 = sum of:
        0.010897844 = weight(_text_:a in 1915) [ClassicSimilarity], result of:
          0.010897844 = score(doc=1915,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20383182 = fieldWeight in 1915, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1915)
        0.006315058 = product of:
          0.012630116 = sum of:
            0.012630116 = weight(_text_:information in 1915) [ClassicSimilarity], result of:
              0.012630116 = score(doc=1915,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1551638 = fieldWeight in 1915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1915)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The conference was attended by about 150 people involved in 31 participating groups. Its goal was to bring research groups together to discuss their work on a new large test collection. A wide variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and discuss how differences between the systems affected performance
    Source
    Information processing and management. 31(1995) no.3, S.271-289
    Type
    a
  4. Smeaton, A.F.; Harman, D.: The TREC experiments and their impact on Europe (1997) 0.01
    0.00655477 = product of:
      0.016386924 = sum of:
        0.005448922 = weight(_text_:a in 7702) [ClassicSimilarity], result of:
          0.005448922 = score(doc=7702,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10191591 = fieldWeight in 7702, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=7702)
        0.010938003 = product of:
          0.021876005 = sum of:
            0.021876005 = weight(_text_:information in 7702) [ClassicSimilarity], result of:
              0.021876005 = score(doc=7702,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2687516 = fieldWeight in 7702, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7702)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Reviews the overall results of the TREC experiments in information retrieval, which differed from other information retrieval research projects in that the document collections used in the research were massive, and the groups participating in the collaborative evaluation were among the main organizations in the field. Reviews the findings of TREC, the way in which it operates and the specialist 'tracks' it supports, and concentrates on European involvement in TREC, examining the participants and the emergence of European TREC-like exercises
    Source
    Journal of information science. 23(1997) no.2, S.169-174
    Type
    a
  5. Harman, D.: Overview of the first Text Retrieval Conference (1993) 0.01
    0.0060245167 = product of:
      0.015061291 = sum of:
        0.009535614 = weight(_text_:a in 548) [ClassicSimilarity], result of:
          0.009535614 = score(doc=548,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.17835285 = fieldWeight in 548, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=548)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 548) [ClassicSimilarity], result of:
              0.011051352 = score(doc=548,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The first Text Retrieval Conference (TREC-1) was held in early November and was attended by about 100 people working in the 25 participating groups. The goal of the conference was to bring research groups together to discuss their work on a new large test collection. A wide variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and discuss how differences among the systems affected performance
    Imprint
    Medford, NJ : Learned Information
    Type
    a