Search (9 results, page 1 of 1)

  • author_ss:"Gillman, P."
  • type_ss:"a"
  • year_i:[1990 TO 2000}
  1. Gillman, P.: Text retrieval (1998) 0.07
    0.069434814 = product of:
      0.20830444 = sum of:
        0.05249342 = weight(_text_:23 in 1502) [ClassicSimilarity], result of:
          0.05249342 = score(doc=1502,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.44800746 = fieldWeight in 1502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.05249342 = weight(_text_:23 in 1502) [ClassicSimilarity], result of:
          0.05249342 = score(doc=1502,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.44800746 = fieldWeight in 1502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.045477543 = weight(_text_:software in 1502) [ClassicSimilarity], result of:
          0.045477543 = score(doc=1502,freq=2.0), product of:
            0.12969498 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.032692216 = queryNorm
            0.35064998 = fieldWeight in 1502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.05249342 = weight(_text_:23 in 1502) [ClassicSimilarity], result of:
          0.05249342 = score(doc=1502,freq=4.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.44800746 = fieldWeight in 1502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.005346625 = weight(_text_:in in 1502) [ClassicSimilarity], result of:
          0.005346625 = score(doc=1502,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.120230645 = fieldWeight in 1502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
      0.33333334 = coord(5/15)
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW appear to ignore all the serious work done on information retrieval over the past 4 decades
    Date
    23. 7.1998 10:35:23
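    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a rough sketch, the first weight(_text_:23 in 1502) clause can be recomputed from the values shown; all inputs (termFreq, docFreq, maxDocs, queryNorm, fieldNorm) are copied verbatim from the tree, and the formulas follow Lucene's ClassicSimilarity:

```python
import math

# Inputs copied from the explain tree for weight(_text_:23 in 1502)
freq = 4.0              # termFreq of "23" in document 1502
doc_freq = 3336         # docFreq from the idf line
max_docs = 44218        # maxDocs from the idf line
query_norm = 0.032692216
field_norm = 0.0625

# ClassicSimilarity components
tf = math.sqrt(freq)                               # 2.0
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # ~3.58406
query_weight = idf * query_norm                    # ~0.11717
field_weight = tf * idf * field_norm               # ~0.44801
clause_score = query_weight * field_weight         # ~0.05249342

# The item's final score is the sum of the five matching clause
# scores multiplied by the coordination factor coord(5/15)
final_score = 0.20830444 * (5.0 / 15.0)            # ~0.069434814
```

    Multiplying the clause sum 0.20830444 by coord(5/15) ≈ 0.3333 reproduces the headline score (0.07, rounded) shown for result 1.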
  2. Gillman, P.: ConQuest: retrieval on a large scale (1995) 0.01
    0.011610266 = product of:
      0.08707699 = sum of:
        0.08039371 = weight(_text_:software in 3361) [ClassicSimilarity], result of:
          0.08039371 = score(doc=3361,freq=4.0), product of:
            0.12969498 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.032692216 = queryNorm
            0.6198675 = fieldWeight in 3361, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=3361)
        0.0066832816 = weight(_text_:in in 3361) [ClassicSimilarity], result of:
          0.0066832816 = score(doc=3361,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.15028831 = fieldWeight in 3361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=3361)
      0.13333334 = coord(2/15)
    
    Abstract
    Gives an overview of ConQuest text retrieval software. It is designed for large systems and has 9 different retrieval techniques which can be used in series or combination to enhance retrieval. Describes its thesaurus-based retrieval, and combinations of retrieval techniques. Discusses the size of the application and its speed
    Theme
    Bibliographic software
  3. Gillman, P.: Transferring text (1993) 0.01
    0.007956284 = product of:
      0.05967213 = sum of:
        0.045477543 = weight(_text_:software in 6246) [ClassicSimilarity], result of:
          0.045477543 = score(doc=6246,freq=2.0), product of:
            0.12969498 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.032692216 = queryNorm
            0.35064998 = fieldWeight in 6246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=6246)
        0.014194585 = weight(_text_:und in 6246) [ClassicSimilarity], result of:
          0.014194585 = score(doc=6246,freq=2.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.19590102 = fieldWeight in 6246, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=6246)
      0.13333334 = coord(2/15)
    
    Abstract
    Describes a consultancy project for the development of a health care thesaurus involving the movement of text between different application programs. The thesaurus was built from existing text within the organisation, originating from 3 sources: natural-language registry file headings, descriptions from an internal business directory, and a controlled vocabulary. The software used was WordPerfect and Cardbox
    Theme
    Conception and application of the thesaurus principle
  4. Gillman, P.: Intelligent OCR (1993) 0.01
    0.0075034127 = product of:
      0.11255118 = sum of:
        0.11255118 = weight(_text_:software in 7049) [ClassicSimilarity], result of:
          0.11255118 = score(doc=7049,freq=4.0), product of:
            0.12969498 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.032692216 = queryNorm
            0.8678145 = fieldWeight in 7049, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.109375 = fieldNorm(doc=7049)
      0.06666667 = coord(1/15)
    
    Abstract
    Reviews the Catchword OCR software supplied with Logitech's ScanMan hand-held scanner. Discusses the OCR process, editing, and the software itself
  5. Gillman, P.: Data handling and text compression (1992) 0.00
    5.346625E-4 = product of:
      0.008019937 = sum of:
        0.008019937 = weight(_text_:in in 5306) [ClassicSimilarity], result of:
          0.008019937 = score(doc=5306,freq=8.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 5306, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5306)
      0.06666667 = coord(1/15)
    
    Abstract
    Data compression has a function in text storage and data handling, but not at the level of compressing data files. The reason is that the decompression of such files adds a time delay to the retrieval process, and users can see this delay as a drawback of the system concerned. Compression techniques can with benefit be applied to index files. A more relevant data handling problem is that posed by the need, in most systems, to store two versions of imported text. The first is the 'native' version, as it might have come from a word processor or text editor. The second is the ASCII version, which is what is actually imported. Inverted file indexes form yet another version. The problem arises out of the need for dynamic indexing and re-indexing of revisable documents in very large database applications such as are found in Office Automation systems. Four mainstream text-management packages are used to show how this problem is handled, and how generic document architectures such as OCA/CDA and SGML might help
  6. Gillman, P.: Assessing database quality (1995) 0.00
    5.346625E-4 = product of:
      0.008019937 = sum of:
        0.008019937 = weight(_text_:in in 4085) [ClassicSimilarity], result of:
          0.008019937 = score(doc=4085,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 4085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=4085)
      0.06666667 = coord(1/15)
    
    Abstract
    There are 4 ways of assessing database quality: accuracy, standardization, completeness, and fitness-for-purpose. The last is the most important assessment because it sets the context for the database in which the other elements can be defined
  7. Gillman, P.: Assessing customer requirements (1994) 0.00
    4.4555214E-4 = product of:
      0.0066832816 = sum of:
        0.0066832816 = weight(_text_:in in 1397) [ClassicSimilarity], result of:
          0.0066832816 = score(doc=1397,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.15028831 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1397)
      0.06666667 = coord(1/15)
    
    Abstract
    Series of articles dealing with assessing user requirements for information services. Looks at the role of the concepts of aggregation and differentiation in identifying user types
  8. Gillman, P.: Text retrieval : key points (1992) 0.00
    3.5644168E-4 = product of:
      0.005346625 = sum of:
        0.005346625 = weight(_text_:in in 4450) [ClassicSimilarity], result of:
          0.005346625 = score(doc=4450,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.120230645 = fieldWeight in 4450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4450)
      0.06666667 = coord(1/15)
    
    Abstract
    Gives a brief overview of what makes a text retrieval system. The text retrieval problem is really one of how text is represented, and how the tools can be used to find what is wanted. Draws comparisons with database management systems. Describes the workings of a text retrieval system, focusing on the description of concepts and ideas in words
  9. Gillman, P.; Martin, G.: Database management (1993) 0.00
    3.5644168E-4 = product of:
      0.005346625 = sum of:
        0.005346625 = weight(_text_:in in 6303) [ClassicSimilarity], result of:
          0.005346625 = score(doc=6303,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.120230645 = fieldWeight in 6303, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6303)
      0.06666667 = coord(1/15)
    
    Abstract
    Distinguishes between static and dynamic data. Whether the database in question is a library catalogue or a mailing list, it is almost bound to be modified. The cost of the routine upkeep of a database is likely to exceed all other costs. Shows how a database continues to grow as the rate of adding records exceeds that of deleting them, and how the management overhead increases correspondingly