Search (298 results, page 1 of 15)

  • year_i:[1980 TO 1990}
  1. Malsburg, C. von der: The correlation theory of brain function (1981) 0.10
    0.10310201 = product of:
      0.2749387 = sum of:
        0.039276958 = product of:
          0.11783087 = sum of:
            0.11783087 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.11783087 = score(doc=76,freq=2.0), product of:
                0.25158808 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.029675366 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.33333334 = coord(1/3)
        0.11783087 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.11783087 = score(doc=76,freq=2.0), product of:
            0.25158808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029675366 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
        0.11783087 = weight(_text_:2f in 76) [ClassicSimilarity], result of:
          0.11783087 = score(doc=76,freq=2.0), product of:
            0.25158808 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.029675366 = queryNorm
            0.46834838 = fieldWeight in 76, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=76)
      0.375 = coord(3/8)
    
    Source
    http://cogprints.org/1380/1/vdM_correlation.pdf
  2. Alkula, R.; Sormunen, E.: Problems and guidelines for database descriptions (1989) 0.03
    0.02644246 = product of:
      0.10576984 = sum of:
        0.045056276 = weight(_text_:wide in 2397) [ClassicSimilarity], result of:
          0.045056276 = score(doc=2397,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 2397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2397)
        0.06071357 = weight(_text_:data in 2397) [ClassicSimilarity], result of:
          0.06071357 = score(doc=2397,freq=14.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.64702475 = fieldWeight in 2397, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2397)
      0.25 = coord(2/8)
    
    Abstract
    An essential part of information retrieval knowledge is the knowledge of data base contents and structures. Currently, the variety of data bases is so wide that it is difficult to know the contents and structure of a particular data base and how they differ from those of other data bases. Because of the lack of commonly acknowledged guidelines for data base descriptions, each on-line service designs and produces printed manuals, on-line help texts and other user documentation in its own manner. For the presentation of exact information and knowledge on a data base, common, structured principles for data base descriptions are needed. Requirements and some solutions for such description principles are presented.
  3. Teskey, F.N.: User models and world models for data, information and knowledge (1989) 0.03
    0.02598612 = product of:
      0.10394448 = sum of:
        0.051492885 = weight(_text_:wide in 2163) [ClassicSimilarity], result of:
          0.051492885 = score(doc=2163,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.3916274 = fieldWeight in 2163, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=2163)
        0.052451592 = weight(_text_:data in 2163) [ClassicSimilarity], result of:
          0.052451592 = score(doc=2163,freq=8.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.5589768 = fieldWeight in 2163, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=2163)
      0.25 = coord(2/8)
    
    Abstract
    In this article we identify the need for a new theory of data, information, and knowledge. A model is developed that distinguishes between data as directly observable facts, information as structured collections of data, and knowledge as methods of using information. The model is intended to support a wide range of information systems. In the article we develop the use of the model for a semantic information retrieval system using the concept of semantic categories. The likely benefits of this are discussed, though as yet no detailed evaluation has been conducted.
  4. Teskey, F.N.: Enriched knowledge representation for information retrieval (1987) 0.02
    0.019377261 = product of:
      0.077509046 = sum of:
        0.045056276 = weight(_text_:wide in 698) [ClassicSimilarity], result of:
          0.045056276 = score(doc=698,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 698, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=698)
        0.03245277 = weight(_text_:data in 698) [ClassicSimilarity], result of:
          0.03245277 = score(doc=698,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.34584928 = fieldWeight in 698, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=698)
      0.25 = coord(2/8)
    
    Abstract
    In this paper we identify the need for a new theory of information. An information model is developed which distinguishes between data, as directly observable facts, information, as structured collections of data, and knowledge as methods of using information. The model is intended to support a wide range of information systems. In the paper we develop the use of the model for a semantic information retrieval system using the concept of semantic categories. The likely benefits of this are discussed, though as yet no detailed evaluation has been conducted.
  5. Smith, D.E.: Reference expert systems : humanizing depersonalized service (1989) 0.02
    0.017000962 = product of:
      0.06800385 = sum of:
        0.045056276 = weight(_text_:wide in 1513) [ClassicSimilarity], result of:
          0.045056276 = score(doc=1513,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 1513, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1513)
        0.022947572 = weight(_text_:data in 1513) [ClassicSimilarity], result of:
          0.022947572 = score(doc=1513,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 1513, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1513)
      0.25 = coord(2/8)
    
    Abstract
    The delivery of library reference service can be practically supplemented through the appropriate incorporation and use of software tools commonly referred to as expert systems. The level of support such systems can afford the reference service organisation is dependent on the degree of complexity characteristic of the rule-based programming techniques used to develop a particular system and the size of its knowledge data base. Since most expert systems are designed to simulate the process of problem-solving practiced by an expert in a given field, an expert system designed to fully emulate library reference work must have the potential to respond to a wide subject range of questions with varying degrees of response adequacy. Describes a microcomputer-based reference expert-type system.
  6. Feather, J.: Towards a European Register of Microform Masters (1989) 0.02
    0.017000962 = product of:
      0.06800385 = sum of:
        0.045056276 = weight(_text_:wide in 1549) [ClassicSimilarity], result of:
          0.045056276 = score(doc=1549,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 1549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1549)
        0.022947572 = weight(_text_:data in 1549) [ClassicSimilarity], result of:
          0.022947572 = score(doc=1549,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 1549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1549)
      0.25 = coord(2/8)
    
    Abstract
    The Commission of the European Communities has commissioned a feasibility study for a European Register of Microform Masters (EROMM), the purpose of which is to investigate the desirability and feasibility of a community-wide register, to propose possible methodologies of compilation, storage and dissemination and to make estimates of costs. The long-term objective of EROMM is to create a union list of records available in collaborating libraries throughout the countries of the European Community, but it is important to recognise that EROMM is a project for the creation of a data base, not for the creation of films or other surrogate media.
  7. Chartron, G.; Dalbin, S.; Monteil, M.-G.; Verillon, M.: Indexation manuelle et indexation automatique : dépasser les oppositions (1989) 0.02
    0.017000962 = product of:
      0.06800385 = sum of:
        0.045056276 = weight(_text_:wide in 3516) [ClassicSimilarity], result of:
          0.045056276 = score(doc=3516,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.342674 = fieldWeight in 3516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3516)
        0.022947572 = weight(_text_:data in 3516) [ClassicSimilarity], result of:
          0.022947572 = score(doc=3516,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 3516, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3516)
      0.25 = coord(2/8)
    
    Abstract
    Report of a study comparing 2 methods of indexing: LEXINET, a computerised system for indexing titles and summaries only; and manual indexing of full texts, using the thesaurus developed by French Electricity (EDF). Both systems were applied to a collection of approximately 2.000 documents on artificial intelligence from the EDF data base. The results were then analysed to compare quantitative performance (number and range of terms) and qualitative performance (ambiguity of terms, specificity, variability, consistency). Overall, neither system proved ideal: LEXINET was deficient as regards lack of accessibility and excessive ambiguity; while the manual system gave rise to an over-wide variation of terms. The ideal system would appear to be a combination of automatic and manual systems, on the evidence produced here.
  8. Griffith, C.: What's all the hype about hypertext? (1989) 0.02
    0.016616028 = product of:
      0.06646411 = sum of:
        0.046361096 = weight(_text_:data in 2505) [ClassicSimilarity], result of:
          0.046361096 = score(doc=2505,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.49407038 = fieldWeight in 2505, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2505)
        0.020103013 = product of:
          0.040206026 = sum of:
            0.040206026 = weight(_text_:22 in 2505) [ClassicSimilarity], result of:
              0.040206026 = score(doc=2505,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.38690117 = fieldWeight in 2505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2505)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Considers the reason why CD-ROM's promise of a large range of legal data bases has, to some extent, been limited. The new range of CD-ROM hypertext data bases, produced by West Publishing Company, are discussed briefly.
    Source
    Information today. 6(1989) no.4, S.22-24
  9. Schabas, A.H.: Postcoordinate retrieval : a comparison of two retrieval languages (1982) 0.02
    0.01660908 = product of:
      0.06643632 = sum of:
        0.038619664 = weight(_text_:wide in 1202) [ClassicSimilarity], result of:
          0.038619664 = score(doc=1202,freq=2.0), product of:
            0.13148437 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029675366 = queryNorm
            0.29372054 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
        0.027816659 = weight(_text_:data in 1202) [ClassicSimilarity], result of:
          0.027816659 = score(doc=1202,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.29644224 = fieldWeight in 1202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
      0.25 = coord(2/8)
    
    Abstract
    This article reports on a comparison of the postcoordinate retrieval effectiveness of two indexing languages: LCSH and PRECIS. The effect of augmenting each with title words was also studied. The database for the study was over 15.000 UK MARC records. Users returned 5.326 relevant judgements for citations retrieved for 61 SDI profiles, representing a wide variety of subjects. Results are reported in terms of precision and relative recall. Pure/applied sciences data and social science data were analyzed separately. Cochran's significance tests for ratios were used to interpret the findings. Recall emerged as the more important measure discriminating the behavior of the two languages. Addition of title words was found to improve recall of both indexing languages significantly. A direct relationship was observed between recall and exhaustivity. For the social sciences searches, recalls from PRECIS alone and from PRECIS with title words were significantly higher than those from LCSH alone and from LCSH with title words, respectively. Corresponding comparisons for the pure/applied sciences searches revealed no significant differences.
  10. Morrow, B.: IMPACT public access catalogue (1989) 0.02
    0.015865577 = product of:
      0.06346231 = sum of:
        0.039338693 = weight(_text_:data in 4059) [ClassicSimilarity], result of:
          0.039338693 = score(doc=4059,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.4192326 = fieldWeight in 4059, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=4059)
        0.024123615 = product of:
          0.04824723 = sum of:
            0.04824723 = weight(_text_:22 in 4059) [ClassicSimilarity], result of:
              0.04824723 = score(doc=4059,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.46428138 = fieldWeight in 4059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4059)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Reviews Auto-Graphics, Inc.'s IMPACT system: designed as a CD-ROM index of a library's data base of holdings and bibliographic information. IMPACT is based on the MARC format.
    Source
    CD-ROM librarian. 4(1989), no.1, S.22-26
  11. Münnich, M.: Katalogisieren auf dem PC : ein Pflichtenheft für die Formalkatalogisierung (1988) 0.02
    0.015376706 = product of:
      0.061506823 = sum of:
        0.045424413 = weight(_text_:data in 2502) [ClassicSimilarity], result of:
          0.045424413 = score(doc=2502,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.48408815 = fieldWeight in 2502, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=2502)
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 2502) [ClassicSimilarity], result of:
              0.03216482 = score(doc=2502,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 2502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2502)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Examines a simpler cataloguing format offered by PCs, without disturbing compatibility, using A-Z cataloguing rules for data input, category codes for tagging and computer-supported data input through windows. Gives numerous examples of catalogue entries, basing techniques on certain category schemes set out by Klaus Haller and Hans Popst. Examines catalogue entries in respect of categories of data bases for authors and corporate names, titles, single volume works, serial issues of collected works, and limited editions of works in several volumes.
    Source
    Bibliotheksdienst. 22(1988) H.9, S.841-856
  12. Knauth, M.: Bibliographies made easy : a look at PRO-CITE (1989) 0.02
    0.015376706 = product of:
      0.061506823 = sum of:
        0.045424413 = weight(_text_:data in 2830) [ClassicSimilarity], result of:
          0.045424413 = score(doc=2830,freq=6.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.48408815 = fieldWeight in 2830, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=2830)
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 2830) [ClassicSimilarity], result of:
              0.03216482 = score(doc=2830,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 2830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2830)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The PRO-CITE and BIBLIO-LINK software packages make the process of compiling bibliographies and internal data bases much easier than when all the steps in the process were manual (compiling, typing). The 2 programs work on records that have been downloaded from on-line data bases. BIBLIO-LINK analyses the downloaded records to determine document type and stores the data in the appropriate PRO-CITE workform, putting fields from the downloaded record into the correct PRO-CITE fields.
    Source
    Computers in libraries. 9(1989) no.4, S.22-24
  13. Lancaster, F.W.: Vocabulary control for information retrieval (1986) 0.01
    0.013292822 = product of:
      0.05317129 = sum of:
        0.03708888 = weight(_text_:data in 217) [ClassicSimilarity], result of:
          0.03708888 = score(doc=217,freq=4.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.3952563 = fieldWeight in 217, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=217)
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
              0.03216482 = score(doc=217,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=217)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    Date
    22. 4.2007 10:07:51
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
  14. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.01
    0.012850125 = product of:
      0.0514005 = sum of:
        0.039338693 = weight(_text_:data in 3564) [ClassicSimilarity], result of:
          0.039338693 = score(doc=3564,freq=8.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.4192326 = fieldWeight in 3564, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3564)
        0.012061807 = product of:
          0.024123615 = sum of:
            0.024123615 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.024123615 = score(doc=3564,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed that 2 reasons contributed to these failures. The searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
  15. Cramer, M.D.; Markland, M.J.: Newspaper indexing with Pro-Cite (1989) 0.01
    0.010577051 = product of:
      0.042308204 = sum of:
        0.026225796 = weight(_text_:data in 2855) [ClassicSimilarity], result of:
          0.026225796 = score(doc=2855,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.2794884 = fieldWeight in 2855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=2855)
        0.01608241 = product of:
          0.03216482 = sum of:
            0.03216482 = weight(_text_:22 in 2855) [ClassicSimilarity], result of:
              0.03216482 = score(doc=2855,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.30952093 = fieldWeight in 2855, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2855)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The university libraries at Virginia Polytechnic Institute and State University began an innovative indexing method in 1985 with the objectives of stronger and easier subject access to local newspapers and the creation of an index which could store information compactly and economically. Discusses the decision to use Pro-Cite software and describes the creation of 10 area data base files. Outlines 2 areas of difficulty: documentation and terminology.
    Date
    30.11.1995 17:22:01
  16. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    0.00925492 = product of:
      0.03701968 = sum of:
        0.022947572 = weight(_text_:data in 2509) [ClassicSimilarity], result of:
          0.022947572 = score(doc=2509,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 2509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2509)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
              0.028144216 = score(doc=2509,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 2509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2509)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Edited version of a report to the Australian Library and Information Association on the Conference on classification theory in the computer age, Albany, New York, 18-19 Nov 88, and on the meeting of the Dewey Editorial Policy Committee which preceded it. The focus of the Editorial Policy Committee Meeting lay in the following areas: browsing; potential for improved subject access; system design; potential conflict between shelf location and information retrieval; and users. At the Conference on classification theory in the computer age the following papers were presented: Applications of artificial intelligence to bibliographic classification, by Irene Travis; Automation and classification, by Elaine Svenonius; Subject classification and language processing for retrieval in large data bases, by Diana Scott; Implications for information processing, by Carol Mandel; and Implications for information science education, by Richard Halsey.
    Date
    8.11.1995 11:52:22
  17. Snow, M.: Visual depictions and the use of MARC : a view from the trenches of slide librarianship (1989) 0.01
    0.00925492 = product of:
      0.03701968 = sum of:
        0.022947572 = weight(_text_:data in 2862) [ClassicSimilarity], result of:
          0.022947572 = score(doc=2862,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 2862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2862)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 2862) [ClassicSimilarity], result of:
              0.028144216 = score(doc=2862,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 2862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2862)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Paper presented at a symposium on 'Implementing the Art and Architecture Thesaurus (AAT): Controlled Vocabulary in the Extended MARC format', held at the 1989 Annual Conference of the Art Libraries Society of North America. The only way to get bibliographic records on to campus on-line library catalogues, and slide records on the national bibliographic utilities, is through the use of MARC. Discusses the importance of having individual slide and photograph records on the national bibliographic utilities, and considers the obstacles which currently make this difficult. Discusses mapping to MARC from data base management systems.
    Date
    4.12.1995 22:51:36
  18. Walker, A.: Australia's pictorial collections on interactive videodisc (1989) 0.01
    0.00925492 = product of:
      0.03701968 = sum of:
        0.022947572 = weight(_text_:data in 2477) [ClassicSimilarity], result of:
          0.022947572 = score(doc=2477,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.24455236 = fieldWeight in 2477, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2477)
        0.014072108 = product of:
          0.028144216 = sum of:
            0.028144216 = weight(_text_:22 in 2477) [ClassicSimilarity], result of:
              0.028144216 = score(doc=2477,freq=2.0), product of:
                0.103918076 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029675366 = queryNorm
                0.2708308 = fieldWeight in 2477, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2477)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The use of interactive videodiscs for preserving, presenting and retrieving pictorial collections in Australia was pioneered at the New South Wales Government Printing Office, Sydney. This project has now stored some 200.000 historical photographs on interactive videodisc, associated with sophisticated microcomputer data bases using the specially developed Just Image software. Videodisc systems to retrieve pictorial material are being used to preserve and present the pictorial collections of various Australian libraries and museums, and standards for the description and indexing of photographs are being developed.
    Date
    3. 1.1999 11:22:04
  19. Duncan, E.B.: Structuring knowledge bases for designers of learning materials (1989) 0.01
    0.008462749 = product of:
      0.033850998 = sum of:
        0.017459875 = weight(_text_:web in 2478) [ClassicSimilarity], result of:
          0.017459875 = score(doc=2478,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 2478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2478)
        0.016391123 = weight(_text_:data in 2478) [ClassicSimilarity], result of:
          0.016391123 = score(doc=2478,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 2478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2478)
      0.25 = coord(2/8)
    
    Abstract
    Three pre-web articles about using hypertext for knowledge representation. Duncan discusses how to use graphical, hypertext displays (she used Xerox PARC's NoteCards on a Xerox 1186 workstation) along with concept maps and facet analysis, a combination that would now be done with topic maps. The screen shots of her graphical displays are quite interesting. Her interest in facets is in how to use them to show things to different people in different ways, for example, so that experts can enter knowledge into a system in one way while novices can see it in another. Duncan found that facet labels (e.g. Process and Product) prompted the expert to think of related concepts when inputting data, and made navigation easier for users. Facets can be joined together, e.g. "Agents (causing) Process," leading to a "reasoning system." She is especially interested in how to show relationships between two things: e.g., A causes B, A uses B, A occurs in B. This is an important question in facet theory, but probably not worth worrying about in a small online classification where the relations are fixed and obvious. These articles may be difficult to find, in which case the reader can find a nice summary in the next article, by Ellis and Vasconcelos (2000). Anyone interested in tracing the history of facets and hypertext will, however, want to see the originals.
  20. Duncan, E.B.: A faceted approach to hypertext (1989) 0.01
    0.008462749 = product of:
      0.033850998 = sum of:
        0.017459875 = weight(_text_:web in 2480) [ClassicSimilarity], result of:
          0.017459875 = score(doc=2480,freq=2.0), product of:
            0.096845865 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029675366 = queryNorm
            0.18028519 = fieldWeight in 2480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2480)
        0.016391123 = weight(_text_:data in 2480) [ClassicSimilarity], result of:
          0.016391123 = score(doc=2480,freq=2.0), product of:
            0.093835 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.029675366 = queryNorm
            0.17468026 = fieldWeight in 2480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2480)
      0.25 = coord(2/8)
    
    Abstract
    Three pre-web articles about using hypertext for knowledge representation. Duncan discusses how to use graphical, hypertext displays (she used Xerox PARC's NoteCards on a Xerox 1186 workstation) along with concept maps and facet analysis, a combination that would now be done with topic maps. The screen shots of her graphical displays are quite interesting. Her interest in facets is in how to use them to show things to different people in different ways, for example, so that experts can enter knowledge into a system in one way while novices can see it in another. Duncan found that facet labels (e.g. Process and Product) prompted the expert to think of related concepts when inputting data, and made navigation easier for users. Facets can be joined together, e.g. "Agents (causing) Process," leading to a "reasoning system." She is especially interested in how to show relationships between two things: e.g., A causes B, A uses B, A occurs in B. This is an important question in facet theory, but probably not worth worrying about in a small online classification where the relations are fixed and obvious. These articles may be difficult to find, in which case the reader can find a nice summary in the next article, by Ellis and Vasconcelos (2000). Anyone interested in tracing the history of facets and hypertext will, however, want to see the originals.
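
Note on the score breakdowns above: the indented figures under each result are Lucene's ClassicSimilarity (tf-idf) score explanations. As a minimal, illustrative sketch only (variable names are assumed, not taken from any API), the following Python recombines the values printed for the first result — idf, queryNorm, fieldNorm, a term frequency of 2.0 and the two coord factors — into its displayed document score of 0.10310201.

    import math

    # Values as printed in the explain tree of result 1; names are illustrative.
    max_docs, doc_freq = 44218, 24
    idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~8.478011
    query_norm = 0.029675366
    field_norm = 0.0390625
    tf = math.sqrt(2.0)                             # termFreq = 2.0 -> ~1.4142135

    query_weight = idf * query_norm                 # ~0.25158808 = queryWeight
    field_weight = tf * idf * field_norm            # ~0.46834838 = fieldWeight
    term_score = query_weight * field_weight        # ~0.11783087 per matching term

    # One clause is down-weighted by coord(1/3); the sum of the three matching
    # clauses is then scaled by coord(3/8), since 3 of 8 query clauses matched.
    doc_score = (term_score / 3 + 2 * term_score) * 3 / 8
    print(f"{doc_score:.8f}")                       # ~0.10310201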

Languages

  • e 227
  • d 64
  • f 4
  • m 1

Types

  • a 240
  • m 30
  • s 19
  • ? 2
  • u 2
  • x 2
  • b 1
  • d 1
  • h 1
  • n 1
  • r 1
