Search (5 results, page 1 of 1)

  • Filter: author_ss:"Armstrong, C.J."
  1. Armstrong, C.J.; Keen, E.M.: Workbook for NEPHIS and KWAC : Microcomputer printed subject indexes teaching package, Pt.1 (1982)
  2. Armstrong, C.J.; Keen, E.M.: Manual for teaching NEPHIS and KWAC : Microcomputer printed subject indexes teaching package, Pt.2 (1982)
  3. Wheatley, A.; Armstrong, C.J.: Metadata, recall, and abstracts : can abstracts ever be reliable indicators of document value? (1997)
    Abstract
    Abstracts from 7 Internet subject trees (Euroferret, Excite, Infoseek, Lycos Top 5%, Magellan, WebCrawler, Yahoo!), 5 Internet subject gateways (ADAM, EEVL, NetFirst, OMNI, SOSIG), and 3 online databases (ERIC, ISI, LISA) were examined for their subject content, treatment of various enriching features, physical properties such as overall length, and their readability. Considerable differences were measured, and consistent similarities among abstracts from each type of source were demonstrated. Internet subject tree abstracts were generally the shortest, and online database abstracts the longest. Subject tree and online database abstracts were the most informative, but the level of coverage of document features such as tables, bibliographies, and geographical constraints was disappointingly poor. On balance, the Internet gateways appeared to be providing the most satisfactory abstracts. The authors discuss the continuing role of abstracts and their functional analogues, such as metadata, in networked information retrieval.
  4. Armstrong, C.J.; Medawar, K.: Investigation into the quality of databases in general use in the UK (1996)
    Abstract
    Reports on a Centre for Information Quality Management (CIQM) BLRDD-funded project which investigated the quality of databases in general use in the UK. Gives a literature review of quality in library and information services. Reports the results of a CIQM questionnaire survey on the quality problems of databases and their effect on users. Carries out evaluations of the following databases: INSPEC on ESA-IRS, INSPEC on KR Data-Star, INSPEC on UMI CD-ROM, BNB on CD-ROM, and Information Science Abstracts Plus CD-ROM. Sets out a methodology for the evaluation of bibliographic databases.
  5. Armstrong, C.J.; Wheatley, A.: Writing abstracts for online databases : results of database producers' guidelines (1998)
    Abstract
    Reports on one area of research in an Electronic Libraries Programme (eLib) MODELS (MOving to Distributed Environments for Library Services) supporting study covering 3 investigative areas: examination of current database producers' guidelines for their abstract writers; a brief survey of abstracts in some traditional online databases; and a detailed survey of abstracts from 3 types of electronic database (print-sourced online databases, Internet subject trees or directories, and Internet gateways). Examination of database producers' guidelines, reported here, gave a clear view of the intentions behind professionally produced traditional (printed-index-based) database abstracts and provided a benchmark against which to judge the conclusions of the larger investigations into abstract style, readability, and content.