Search (2 results, page 1 of 1)

  • author_ss:"Davis, C.H."
  • author_ss:"Shaw, D."
  1. Sun, Q.; Shaw, D.; Davis, C.H.: A model for estimating the occurrence of same-frequency words and the boundary between high- and low-frequency words in texts (1999) 0.00
    0.0029000505 = product of:
      0.005800101 = sum of:
        0.005800101 = product of:
          0.011600202 = sum of:
            0.011600202 = weight(_text_:a in 3063) [ClassicSimilarity], result of:
              0.011600202 = score(doc=3063,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21843673 = fieldWeight in 3063, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3063)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
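
     The tree above is ordinary Lucene ClassicSimilarity "explain" output for this result (the second result below differs only in the term frequency). As a minimal sketch of how the factors multiply out, assuming Lucene's classic TF-IDF definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))) and taking queryNorm, fieldNorm and the two coord(1/2) factors straight from the tree:

       import math

       def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
           # Pieces of the ClassicSimilarity explain tree shown above
           tf = math.sqrt(freq)                             # 3.4641016 for freq=12
           idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.153047 for docFreq=37942, maxDocs=44218
           query_weight = idf * query_norm                  # 0.053105544
           field_weight = tf * idf * field_norm             # 0.21843673
           score = query_weight * field_weight              # weight(_text_:a in 3063) = 0.011600202
           for c in coords:                                 # the two coord(1/2) factors
               score *= c
           return score

       # Result 1 (doc 3063, freq=12) and result 2 (doc 3325, freq=6)
       print(classic_score(12, 37942, 44218, 0.046056706, 0.0546875, [0.5, 0.5]))  # ~0.0029000505
       print(classic_score(6, 37942, 44218, 0.046056706, 0.0546875, [0.5, 0.5]))   # ~0.0020506454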
    
    Abstract
     A simpler model is proposed for estimating the frequency of any same-frequency words and identifying the boundary point between high-frequency and low-frequency words in a text. The model, based on a 'maximum-ranking method', assigns ranks to the words and estimates word frequency by a formula. The boundary value between high-frequency and low-frequency words is obtained by taking the square root of the number of different words in the text. This straightforward model was used successfully with both English and Chinese texts.
    Type
    a
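
     The boundary rule quoted in the abstract (the square root of the number of different words) is concrete enough to state as code. A minimal sketch, assuming plain regex word tokenization and counting a word as high-frequency only when its frequency exceeds the boundary; the paper's 'maximum-ranking' estimation formula itself is not spelled out in the abstract and is not reproduced here:

       import math
       import re
       from collections import Counter

       def split_by_boundary(text):
           # Count each distinct word, then set the high/low boundary to the
           # square root of the number of different words (the rule in the abstract).
           counts = Counter(re.findall(r"\w+", text.lower()))
           boundary = math.sqrt(len(counts))
           high = {w: c for w, c in counts.items() if c > boundary}
           low = {w: c for w, c in counts.items() if c <= boundary}
           return boundary, high, low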
  2. Davis, C.H.; Shaw, D.: Comparison of retrieval system interfaces using an objective measure of screen design effectiveness (1989) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 3325) [ClassicSimilarity], result of:
              0.008202582 = score(doc=3325,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 3325, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3325)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Many evaluations of screen design for computer system interfaces are subjective. At best, they consist of sophisticated measures of user behaviour based on instruments devised by cognitive scientists; at worst, they represent only the preconceived notions of software designers. Two straightforward experiments are described that use tallies of keyboarding errors as a measure of interface effectiveness. By programming the computer to keep such tallies during the input of search logic for a retrieval system, it is possible to obtain objective and empirically based data for comparing the effectiveness of different interface designs.
    Type
    a
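
     The abstract above describes having the retrieval system itself tally keyboarding errors while users enter search logic, and comparing interface designs on those tallies. A minimal sketch of such a counter, assuming corrective keystrokes (backspace/delete) and rejected query lines are the events being counted; the paper's actual error definition is not given in the abstract:

       class KeyingErrorTally:
           """Tally keyboarding errors during search-logic input for one interface design."""

           def __init__(self, interface_name):
               self.interface = interface_name
               self.errors = 0
               self.queries = 0

           def keystroke(self, key):
               # Assumption: corrective keystrokes are counted as errors.
               if key in ("BACKSPACE", "DELETE"):
                   self.errors += 1

           def query_submitted(self, accepted):
               # Assumption: a query the system rejects as malformed also counts.
               self.queries += 1
               if not accepted:
                   self.errors += 1

           def errors_per_query(self):
               return self.errors / self.queries if self.queries else 0.0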