Search (4 results, page 1 of 1)

  • author_ss:"Robertson, S."
  • year_i:[1990 TO 2000}
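  The two filters above are Lucene/Solr field queries: an exact match on the author_ss string field and a range on year_i, where the opening square bracket is inclusive and the closing curly brace exclusive, i.e. publication years 1990 through 1999. A minimal sketch of reproducing this search against a Solr endpoint follows; the host, port, and collection name are assumptions for illustration only.

      import requests

      # Hypothetical Solr endpoint; host, port, and collection name are assumptions.
      SOLR_SELECT = "http://localhost:8983/solr/literature/select"

      params = {
          "q": "*:*",
          # Filter queries taken from the facet list above; [ is inclusive, } exclusive.
          "fq": ['author_ss:"Robertson, S."', "year_i:[1990 TO 2000}"],
          "rows": 10,
          "debugQuery": "true",  # asks Solr for the per-document score explanations
      }

      docs = requests.get(SOLR_SELECT, params=params).json()["response"]["docs"]
      for doc in docs:
          print(doc.get("author_ss"), doc.get("year_i"))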
  1. Beaulieu, M.; Robertson, S.; Rasmussen, E.: Evaluating interactive systems in TREC (1996) 0.00
    0.002374294 = product of:
      0.004748588 = sum of:
        0.004748588 = product of:
          0.009497176 = sum of:
            0.009497176 = weight(_text_:a in 2998) [ClassicSimilarity], result of:
              0.009497176 = score(doc=2998,freq=12.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.21843673 = fieldWeight in 2998, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2998)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The TREC experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an 'interactive track' within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular, there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow-up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance.
    Type
    a
  2. Robertson, S.: In memoriam Cyril W. Cleverdon (1998) 0.00
    0.001938603 = product of:
      0.003877206 = sum of:
        0.003877206 = product of:
          0.007754412 = sum of:
            0.007754412 = weight(_text_:a in 1797) [ClassicSimilarity], result of:
              0.007754412 = score(doc=1797,freq=2.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.17835285 = fieldWeight in 1797, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1797)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  3. Hancock-Beaulieu, M.; Robertson, S.; Neilson, C.: Evaluation of online catalogues : eliciting information from the user (1991) 0.00
    0.001938603 = product of:
      0.003877206 = sum of:
        0.003877206 = product of:
          0.007754412 = sum of:
            0.007754412 = weight(_text_:a in 2766) [ClassicSimilarity], result of:
              0.007754412 = score(doc=2766,freq=8.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.17835285 = fieldWeight in 2766, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2766)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     An investigation of tools, techniques, and methods for the evaluation of interactive library catalogues is described, with emphasis on diagnostic methods and on use of the catalogue in a wider context of user information-seeking behaviour. A front-end system (Olive) was developed to test various enhancements of traditional transaction logging as a data-gathering technique for evaluation purposes. These include full-screen logging; pre- and post-search, online/offline, and in-search interactive questionnaires; search replays; as well as talk-aloud. The extent of subject or hybrid searching activity as opposed to specific item searching is also highlighted.
    Type
    a
  4. Jones, S.; Gatford, M.; Robertson, S.; Hancock-Beaulieu, M.; Secker, J.; Walker, S.: Interactive thesaurus navigation : intelligence rules OK? (1995) 0.00
    0.0013707994 = product of:
      0.0027415988 = sum of:
        0.0027415988 = product of:
          0.0054831975 = sum of:
            0.0054831975 = weight(_text_:a in 180) [ClassicSimilarity], result of:
              0.0054831975 = score(doc=180,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.12611452 = fieldWeight in 180, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=180)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     We discuss whether it is feasible to build intelligent rule- or weight-based algorithms into general-purpose software for interactive thesaurus navigation. We survey some approaches to the problem reported in the literature, particularly those involving the assignment of 'link weights' in a thesaurus network, and point out some problems of both principle and practice. We then describe investigations which entailed logging the behavior of thesaurus users and testing the effect of thesaurus-based query enhancement in an IR system using term weighting, in an attempt to identify successful strategies to incorporate into automatic procedures. The results cause us to question many of the assumptions made by previous researchers in this area.
    Type
    a
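
  The score breakdowns attached to each result are Lucene ClassicSimilarity (TF-IDF) explanations. Using the figures reported for result 1 (doc 2998), the displayed value can be reproduced step by step; the following is a minimal arithmetic check, not part of the original output.

      import math

      # Figures copied from the explain tree of result 1 (doc 2998).
      doc_freq, max_docs = 37942, 44218
      freq       = 12.0
      field_norm = 0.0546875
      query_norm = 0.037706986

      idf = 1 + math.log(max_docs / (doc_freq + 1))   # ≈ 1.153047
      tf  = math.sqrt(freq)                           # ≈ 3.4641016

      query_weight = idf * query_norm                 # ≈ 0.043477926
      field_weight = tf * idf * field_norm            # ≈ 0.21843673
      term_score   = query_weight * field_weight      # ≈ 0.009497176

      # The two coord(1/2) factors each halve the score before it is displayed.
      final_score = term_score * 0.5 * 0.5            # ≈ 0.002374294
      print(final_score)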
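
  Result 4 describes testing thesaurus-based query enhancement with term weighting. Purely as an illustrative sketch, and not the weighting scheme used in that paper, expansion terms drawn from a thesaurus could be added to a query with a reduced collection-frequency weight along these lines; the idf formula, the 0.5 downweighting factor, and all data below are assumptions.

      import math

      def idf_weight(doc_freq, num_docs):
          # Inverse-document-frequency style weight; chosen only for illustration.
          return math.log((num_docs - doc_freq + 0.5) / (doc_freq + 0.5))

      def expand_query(query_terms, thesaurus, doc_freqs, num_docs, factor=0.5):
          # Add thesaurus neighbours of each query term, downweighted by `factor`.
          weights = {t: idf_weight(doc_freqs.get(t, 1), num_docs) for t in query_terms}
          for term in query_terms:
              for related in thesaurus.get(term, []):
                  w = factor * idf_weight(doc_freqs.get(related, 1), num_docs)
                  weights[related] = max(weights.get(related, 0.0), w)
          return weights

      # Toy usage with made-up statistics.
      thesaurus = {"catalogue": ["opac", "online catalogue"]}
      doc_freqs = {"catalogue": 120, "opac": 40, "online catalogue": 55}
      print(expand_query(["catalogue"], thesaurus, doc_freqs, num_docs=10000))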