Search (4 results, page 1 of 1)

  • author_ss:"Gillman, P."
  • type_ss:"a"
  • year_i:[1990 TO 2000}
  1. Gillman, P.: Data handling and text compression (1992) 0.03
    0.032712005 = product of:
      0.08178001 = sum of:
        0.0538205 = weight(_text_:index in 5306) [ClassicSimilarity], result of:
          0.0538205 = score(doc=5306,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.28967714 = fieldWeight in 5306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=5306)
        0.027959513 = weight(_text_:system in 5306) [ClassicSimilarity], result of:
          0.027959513 = score(doc=5306,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 5306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=5306)
      0.4 = coord(2/5)
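
    Reading aid: these explain trees are standard Lucene ClassicSimilarity (TF-IDF) output, and the tree above decomposes as

    \[
      \mathrm{score}(q,d) = \mathrm{coord}(q,d)\,\sum_{t\in q}
      \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}}\cdot
      \underbrace{\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}},
      \qquad
      \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1}.
    \]

    Plugging in the figures above: 0.4 × (0.18579477 × 0.28967714 + 0.13391352 × 0.20878783) ≈ 0.0327120, the reported score.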
    
    Abstract
    Data compression has a function in text storage and data handling, but not at the level of compressing data files. The reason is that the decompression of such files adds a time delay to the retrieval process, and users may see this delay as a drawback of the system concerned. Compression techniques can, however, be applied with benefit to index files. A more relevant data handling problem is that posed by the need, in most systems, to store two versions of imported text. The first is the 'native' version, as it might have come from a word processor or text editor. The second is the ASCII version, which is what is actually imported. Inverted file indexes form yet another version. The problem arises out of the need for dynamic indexing and re-indexing of revisable documents in very large database applications such as are found in Office Automation systems. Four mainstream text-management packages are used to show how this problem is handled, and how generic document architectures such as OCA/CDA and SGML might help.
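
    The abstract does not name a compression scheme; a minimal illustrative sketch of one standard kind applied to inverted index files is gap (delta) encoding of sorted document numbers in variable-length bytes. Decompression is a cheap linear scan, which is why index compression avoids the retrieval delay the author attributes to compressed data files. (Python; the doc ids are arbitrary examples.)

      # Hypothetical sketch: delta + varint compression of a postings list.
      def encode_postings(doc_ids):
          out, prev = bytearray(), 0
          for d in sorted(doc_ids):
              gap, prev = d - prev, d               # store gaps, not absolute ids
              while gap >= 0x80:                    # 7 data bits per byte,
                  out.append((gap & 0x7F) | 0x80)   # high bit = "more follows"
                  gap >>= 7
              out.append(gap)
          return bytes(out)

      def decode_postings(data):
          ids, cur, shift, prev = [], 0, 0, 0
          for b in data:
              cur |= (b & 0x7F) << shift
              if b & 0x80:                          # continuation byte
                  shift += 7
              else:                                 # last byte of this gap
                  prev += cur
                  ids.append(prev)
                  cur, shift = 0, 0
          return ids

      assert decode_postings(encode_postings([3, 7, 4085, 5306])) == [3, 7, 4085, 5306]
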
  2. Gillman, P.: Assessing database quality (1995) 0.02
    0.01936723 = product of:
      0.09683614 = sum of:
        0.09683614 = weight(_text_:context in 4085) [ClassicSimilarity], result of:
          0.09683614 = score(doc=4085,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.54950815 = fieldWeight in 4085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.09375 = fieldNorm(doc=4085)
      0.2 = coord(1/5)
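
    The single-term tree above is easy to reproduce. A short Python sketch, assuming nothing beyond Lucene's documented ClassicSimilarity formula, with every constant read off the explain output:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
          term_idf = idf(doc_freq, max_docs)                        # 4.14465
          query_weight = term_idf * query_norm                      # 0.17622331
          field_weight = math.sqrt(freq) * term_idf * field_norm    # 0.54950815
          return coord * query_weight * field_weight

      # Term "context" in doc 4085, as reported above:
      print(classic_score(freq=2.0, doc_freq=1904, max_docs=44218,
                          query_norm=0.04251826, field_norm=0.09375,
                          coord=1 / 5))                             # ~0.01936723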
    
    Abstract
    There are 4 ways of assessing database quality: accuracy, standardization, completeness, and fitness-for-purpose. The last is the most important assessment because it sets the context for the database within which the other elements can be defined.
  3. Gillman, P.: Text retrieval (1998) 0.01
    0.0129114855 = product of:
      0.064557426 = sum of:
        0.064557426 = weight(_text_:context in 1502) [ClassicSimilarity], result of:
          0.064557426 = score(doc=1502,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.36633876 = fieldWeight in 1502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
      0.2 = coord(1/5)
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW appear to ignore all the serious work done on information retrieval over the past 4 decades.
  4. Gillman, P.: Text retrieval : key points (1992) 0.01
    0.010544192 = product of:
      0.05272096 = sum of:
        0.05272096 = weight(_text_:system in 4450) [ClassicSimilarity], result of:
          0.05272096 = score(doc=4450,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3936941 = fieldWeight in 4450, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=4450)
      0.2 = coord(1/5)
    
    Abstract
    Gives a brief overview of what makes a text retrieval system. The text retrieval problem is really one of how text is represented and of the tools that can be used to find what is wanted. Draws comparisons with database management systems. Describes the workings of a text retrieval system, focusing on the description of concepts and ideas in words.
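
    A minimal sketch of the representation this overview describes: documents reduced to words, an inverted file mapping each word to the documents containing it, and a query answered by intersecting postings sets. (Python; the toy documents and doc ids are invented for illustration.)

      from collections import defaultdict

      docs = {
          4450: "a text retrieval system represents documents as words",
          5306: "index files in a text management system",
      }

      # Build the inverted file: word -> set of doc ids.
      index = defaultdict(set)
      for doc_id, text in docs.items():
          for word in text.lower().split():
              index[word].add(doc_id)

      def search(*terms):
          # AND query: documents containing every term.
          sets = [index[t] for t in terms]
          return set.intersection(*sets) if sets else set()

      print(search("text", "system"))   # -> {4450, 5306}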