Search (10 results, page 1 of 1)

  • Filter: author_ss:"Gillman, P."
  • Filter: year_i:[1990 TO 2000}
  1. Gillman, P.: ConQuest: retrieval on a large scale (1995) 0.11
    0.10642086 = product of:
      0.23944694 = sum of:
        0.08389453 = weight(_text_:applications in 3361) [ClassicSimilarity], result of:
          0.08389453 = score(doc=3361,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 3361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.078125 = fieldNorm(doc=3361)
        0.018332949 = weight(_text_:of in 3361) [ClassicSimilarity], result of:
          0.018332949 = score(doc=3361,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 3361, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3361)
        0.040879667 = weight(_text_:systems in 3361) [ClassicSimilarity], result of:
          0.040879667 = score(doc=3361,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.339541 = fieldWeight in 3361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.078125 = fieldNorm(doc=3361)
        0.096339785 = weight(_text_:software in 3361) [ClassicSimilarity], result of:
          0.096339785 = score(doc=3361,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.6198675 = fieldWeight in 3361, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=3361)
      0.44444445 = coord(4/9)
    
    Abstract
    Gives an overview of ConQuest text retrieval software. It is designed for large systems and has 9 different retrieval techniques, which can be used in series or in combination to enhance retrieval. Describes its thesaurus-based retrieval and combinations of retrieval techniques. Discusses the size of the application and its speed
    Source
    TIP applications. 9(1995) no.3, S.4-6
    Theme
    Bibliographische Software
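  Note: the score breakdowns shown with each result are Lucene ClassicSimilarity (TF-IDF) explain output. Each matching clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm with tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)), and the clause sum is scaled by coord (matching clauses ÷ total query clauses). A minimal Python sketch re-deriving result 1's score from the numbers above; queryNorm is taken as given, since the other five query clauses needed to derive it are not visible here:

    import math

    MAX_DOCS   = 44218        # from the idf lines above
    QUERY_NORM = 0.03917671   # taken as given (1/sqrt of the summed squared idfs)

    def idf(doc_freq):
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

    def clause_weight(freq, doc_freq, field_norm):
        query_weight = idf(doc_freq) * QUERY_NORM                    # query side
        field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm  # tf * idf * lengthNorm
        return query_weight * field_weight

    # (freq, docFreq, fieldNorm) for the four clauses matching doc 3361
    clauses = [
        (2.0, 1471,  0.078125),   # _text_:applications
        (6.0, 25162, 0.078125),   # _text_:of
        (2.0, 5561,  0.078125),   # _text_:systems
        (4.0, 2274,  0.078125),   # _text_:software
    ]
    score = sum(clause_weight(*c) for c in clauses) * (4 / 9)  # coord(4/9)
    print(round(score, 8))   # ~0.10642086, the reported score (float32 rounding aside)
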
  2. Gillman, P.: Data handling and text compression (1992) 0.07
    0.07280823 = product of:
      0.16381851 = sum of:
        0.050336715 = weight(_text_:applications in 5306) [ClassicSimilarity], result of:
          0.050336715 = score(doc=5306,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.2918479 = fieldWeight in 5306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.046875 = fieldNorm(doc=5306)
        0.016802425 = weight(_text_:of in 5306) [ClassicSimilarity], result of:
          0.016802425 = score(doc=5306,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2742677 = fieldWeight in 5306, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5306)
        0.034687545 = weight(_text_:systems in 5306) [ClassicSimilarity], result of:
          0.034687545 = score(doc=5306,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 5306, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5306)
        0.061991822 = product of:
          0.123983644 = sum of:
            0.123983644 = weight(_text_:packages in 5306) [ClassicSimilarity], result of:
              0.123983644 = score(doc=5306,freq=2.0), product of:
                0.2706874 = queryWeight, product of:
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.03917671 = queryNorm
                0.45803255 = fieldWeight in 5306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.9093957 = idf(docFreq=119, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5306)
          0.5 = coord(1/2)
      0.44444445 = coord(4/9)
    
    Abstract
    Data compression has a function in text storage and data handling, but not at the level of compressing data files. The reason is that the decompression of such files adds a time delay to the retrieval process, and users can see this delay as a drawback of the system concerned. Compression techniques can, however, be applied with benefit to index files. A more relevant data handling problem is that posed by the need, in most systems, to store two versions of imported text. The first is the 'native' version, as it might have come from a word processor or text editor. The second is the ASCII version, which is what is actually imported. Inverted file indexes form yet another version. The problem arises out of the need for dynamic indexing and re-indexing of revisable documents in very large database applications such as are found in Office Automation systems. Four mainstream text-management packages are used to show how this problem is handled, and how generic document architectures such as ODA/CDA and SGML might help
    Source
    Journal of information science. 18(1992), S.105-110
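  One structural difference in result 2: the 'packages' clause sits inside a nested Boolean sub-query of its own, shown as product(sum(...), coord(1/2)), meaning only one of that sub-query's two optional clauses matched; plausibly the term was expanded across two fields or variants, though the explain output does not say which. Its weight is therefore halved before entering the outer sum. A self-contained sketch under that assumption:

    import math

    def w(freq, doc_freq, field_norm, qn=0.03917671, max_docs=44218):
        # ClassicSimilarity clause weight: queryWeight * fieldWeight
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        return (idf * qn) * (math.sqrt(freq) * idf * field_norm)

    inner = w(2.0, 119, 0.046875) * (1 / 2)  # 'packages' with inner coord(1/2): ~0.061991822
    outer = (w(2.0, 1471, 0.046875)          # applications
             + w(14.0, 25162, 0.046875)      # of
             + w(4.0, 5561, 0.046875)        # systems
             + inner)
    print(round(outer * 4 / 9, 8))           # ~0.07280823, the reported score
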
  3. Gillman, P.: Intelligent OCR (1993) 0.06
    0.0560729 = product of:
      0.25232804 = sum of:
        0.11745234 = weight(_text_:applications in 7049) [ClassicSimilarity], result of:
          0.11745234 = score(doc=7049,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.6809785 = fieldWeight in 7049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.109375 = fieldNorm(doc=7049)
        0.1348757 = weight(_text_:software in 7049) [ClassicSimilarity], result of:
          0.1348757 = score(doc=7049,freq=4.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.8678145 = fieldWeight in 7049, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.109375 = fieldNorm(doc=7049)
      0.22222222 = coord(2/9)
    
    Abstract
    Reviews the CatchWord OCR software supplied with Logitech's ScanMan hand-held scanner. Discusses the OCR process, editing, and the software itself
    Source
    C and L applications. 7(1993) no.4, S.8-11
  4. Gillman, P.: Transferring text (1993) 0.04
    0.044529554 = product of:
      0.13358866 = sum of:
        0.06711562 = weight(_text_:applications in 6246) [ClassicSimilarity], result of:
          0.06711562 = score(doc=6246,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 6246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=6246)
        0.011975031 = weight(_text_:of in 6246) [ClassicSimilarity], result of:
          0.011975031 = score(doc=6246,freq=4.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.19546966 = fieldWeight in 6246, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6246)
        0.054498006 = weight(_text_:software in 6246) [ClassicSimilarity], result of:
          0.054498006 = score(doc=6246,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.35064998 = fieldWeight in 6246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=6246)
      0.33333334 = coord(3/9)
    
    Abstract
    Describes a consultancy project for the development of a health care thesaurus, involving the movement of text between different application programs. The thesaurus was built from existing text within the organisation, originating from 3 sources: natural language registry file headings; descriptions from an internal business directory; and a controlled vocabulary. The software used was WordPerfect and Cardbox
    Source
    C and L applications. 6(1993) no.9, S.9-11
  5. Gillman, P.: Text retrieval : key points (1992) 0.04
    0.038918205 = product of:
      0.11675461 = sum of:
        0.06711562 = weight(_text_:applications in 4450) [ClassicSimilarity], result of:
          0.06711562 = score(doc=4450,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 4450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=4450)
        0.016935252 = weight(_text_:of in 4450) [ClassicSimilarity], result of:
          0.016935252 = score(doc=4450,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 4450, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4450)
        0.03270373 = weight(_text_:systems in 4450) [ClassicSimilarity], result of:
          0.03270373 = score(doc=4450,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 4450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=4450)
      0.33333334 = coord(3/9)
    
    Abstract
    Gives a brief overview of what makes a text retrieval system. The text retrieval problem is really one of how text is represented, and of how tools can be used to find what is wanted. Draws comparisons with database management systems. Describes the workings of a text retrieval system, focusing on the description of concepts and ideas in words
    Source
    C and L applications. 6(1992) no.3, S.9-11 (pt.1); no.2, S.8-9 (pt.2)
  6. Gillman, P.: Assessing database quality (1995) 0.03
    0.025194416 = product of:
      0.11337487 = sum of:
        0.10067343 = weight(_text_:applications in 4085) [ClassicSimilarity], result of:
          0.10067343 = score(doc=4085,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5836958 = fieldWeight in 4085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.09375 = fieldNorm(doc=4085)
        0.012701439 = weight(_text_:of in 4085) [ClassicSimilarity], result of:
          0.012701439 = score(doc=4085,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 4085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=4085)
      0.22222222 = coord(2/9)
    
    Abstract
    There are 4 ways of assessing database quality: accuracy, standardization, completeness, and fitness-for-purpose. The last is the most important assessment because it sets the context for the database in which the other elements can be defined
    Source
    TIP applications. 9(1995) no.5, S.4-8
  7. Gillman, P.: Assessing customer requirements (1994) 0.02
    0.022717217 = product of:
      0.10222748 = sum of:
        0.08389453 = weight(_text_:applications in 1397) [ClassicSimilarity], result of:
          0.08389453 = score(doc=1397,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.078125 = fieldNorm(doc=1397)
        0.018332949 = weight(_text_:of in 1397) [ClassicSimilarity], result of:
          0.018332949 = score(doc=1397,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 1397, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=1397)
      0.22222222 = coord(2/9)
    
    Abstract
    Series of articles dealing with assessing user requirements for information services. Looks at the role of the concepts of aggregation and differentiation in identifying user types
    Source
    TIP applications. (T.1); 8(1994) no.3, S.9-12 (T.2); 8(1994) no.4, S.11-13 (T.3)
  8. Gillman, P.; Martin, G.: Database management (1993) 0.02
    0.018677972 = product of:
      0.08405087 = sum of:
        0.06711562 = weight(_text_:applications in 6303) [ClassicSimilarity], result of:
          0.06711562 = score(doc=6303,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.38913056 = fieldWeight in 6303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=6303)
        0.016935252 = weight(_text_:of in 6303) [ClassicSimilarity], result of:
          0.016935252 = score(doc=6303,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 6303, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6303)
      0.22222222 = coord(2/9)
    
    Abstract
    Distinguishes between static and dynamic data. Whether the database in question is a library catalogue or a mailing list, it is almost bound to be modified. The cost of the routine upkeep of a database is likely to exceed all other costs. Shows how a database continues to grow as the rate of adding records exceeds that of deleting them. The management overhead increases correspondingly
    Source
    C and L applications. 7(1993) no.1, S.8-10
  9. Gillman, P.: Text retrieval (1998) 0.02
    0.015874058 = product of:
      0.07143326 = sum of:
        0.016935252 = weight(_text_:of in 1502) [ClassicSimilarity], result of:
          0.016935252 = score(doc=1502,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 1502, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.054498006 = weight(_text_:software in 1502) [ClassicSimilarity], result of:
          0.054498006 = score(doc=1502,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.35064998 = fieldWeight in 1502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
      0.22222222 = coord(2/9)
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW appear to ignore all the serious work done on information retrieval over the past 4 decades
  10. Gillman, P.: National Name Authority File : Report to the National Council on Archives (1998) 0.00
    0.002730383 = product of:
      0.024573447 = sum of:
        0.024573447 = weight(_text_:of in 1440) [ClassicSimilarity], result of:
          0.024573447 = score(doc=1440,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.40111488 = fieldWeight in 1440, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1440)
      0.11111111 = coord(1/9)
    
    Abstract
    Reports results of a National Council on Archives project: to survey the state of automation of archival searching aids in the UK, and to assess the feasibility of establishing a National Name Authority File (NNAF). The investigation encompassed 3 elements: extent of use of computerised archival searching aids in record offices and other archives; preparedness of archivists to cooperate with the creation of national name authority files for personal, family, place and corporate names; and requirements and costings for establishing a central server to maintain and disseminate the authority files. Reports results of a questionnaire survey of record offices and archives, followed up by visits to representatives of the major national institutions to establish the context within which the NNAF might be created and used