Search (251 results, page 13 of 13)

  • year_i:[1960 TO 1970}
  1. Moss, R.: Categories and relations : Origins of two classification theories (1964) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 1816) [ClassicSimilarity], result of:
          0.010096614 = score(doc=1816,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 1816, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1816)
      0.16666667 = coord(1/6)
    
    Abstract
    The resemblances between the categories of Aristotle and those of Ranganathan are shown. These categories are examined in the light of criticism made by Bertrand Russell and are shown to have no validity. Similar comparisons are made between the relations of Hume and Farradane. Farradane's work is a return to Hume, who is generally acknowledged as one of the founders of the British school of empirical philosophy which continues to Russell and beyond. In Russell's work lies the most promising line of development for information classification and indexing.
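The score breakdown shown for each result is Lucene's ClassicSimilarity explanation. As a rough sketch, the final value for this entry can be recomputed from the listed factors, assuming the classic TF-IDF formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the norm and coord values are taken directly from the listing above:

```python
import math

# Recompute the ClassicSimilarity explanation above (assumed formulas):
#   tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))
freq, doc_freq, max_docs = 4.0, 30841, 44218
query_norm, field_norm, coord = 0.043654136, 0.0625, 1 / 6  # from the listing

tf = math.sqrt(freq)                           # 2.0 = tf(freq=4.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))  # ~1.3602545
query_weight = idf * query_norm                # ~0.059380736
field_weight = tf * idf * field_norm           # ~0.17003182
score = query_weight * field_weight * coord    # ~0.001682769

print(score)
```

Each intermediate matches the corresponding line of the explanation, which is why the breakdown reads as nested `product of:` / `sum of:` terms.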
  2. Zadeh, L.A.: Fuzzy sets (1965) 0.00
    0.001682769 = product of:
      0.010096614 = sum of:
        0.010096614 = weight(_text_:in in 5460) [ClassicSimilarity], result of:
          0.010096614 = score(doc=5460,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.17003182 = fieldWeight in 5460, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=5460)
      0.16666667 = coord(1/6)
    
    Abstract
    A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one. The notions of inclusion, union, intersection, complement, relation, convexity, etc., are extended to such sets, and various properties of these notions in the context of fuzzy sets are established. In particular, a separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint.
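The extended operations Zadeh describes can be sketched in a few lines, assuming the standard definitions (union = pointwise max, intersection = pointwise min, complement = 1 - grade); the sets and membership grades below are invented for illustration:

```python
# Two fuzzy sets over the same universe, as membership functions (invented grades)
A = {"apple": 0.9, "pear": 0.4, "plum": 0.0}
B = {"apple": 0.3, "pear": 0.7, "plum": 0.5}

union        = {x: max(A[x], B[x]) for x in A}   # grade in A OR B
intersection = {x: min(A[x], B[x]) for x in A}   # grade in A AND B
complement_A = {x: 1 - A[x] for x in A}          # grade of NOT A

# Inclusion: the intersection is contained in A (its grades never exceed A's)
contained = all(intersection[x] <= A[x] for x in A)

print(union)  # {'apple': 0.9, 'pear': 0.7, 'plum': 0.5}
```

With grades restricted to {0, 1} these definitions collapse to ordinary set union, intersection, and complement, which is the sense in which they "extend" the classical notions.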
  3. Garfield, E.: Chemico-linguistics : computer translation of chemical nomenclature (1961) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 3458) [ClassicSimilarity], result of:
          0.008924231 = score(doc=3458,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 3458, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=3458)
      0.16666667 = coord(1/6)
    
    Content
    Summary of the dissertation. See also: Garfield, E.: An algorithm for translating chemical names to molecular formulas. Doctoral dissertation, University of Pennsylvania, 1961. In: Essays of an information scientist. Vol. 7. Philadelphia, PA: ISI Press, 1985. pp.441-513.
  4. Lesk, M.E.; Salton, G.: Relevance assessments and retrieval system evaluation (1969) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 4151) [ClassicSimilarity], result of:
          0.008834538 = score(doc=4151,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 4151, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
      0.16666667 = coord(1/6)
    
    Abstract
    Two widely used criteria for evaluating the effectiveness of information retrieval systems are, respectively, the recall and the precision. Since the determination of these measures is dependent on a distinction between documents which are relevant to a given query and documents which are not relevant to that query, it has sometimes been claimed that an accurate, generally valid evaluation cannot be based on recall and precision measures. A study was made to determine the effect of variations in relevance assessments on these measures; it was found that such variations do not produce significant variations in average recall and precision. It thus appears that properly computed recall and precision data may represent effectiveness indicators which are generally valid for many distinct user classes.
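The two measures under discussion can be illustrated with a toy example, assuming binary relevance judgments; the document IDs below are invented:

```python
# One query's results: what the system returned vs. what a judge marked relevant
retrieved = {"d1", "d2", "d3", "d5"}
relevant  = {"d1", "d3", "d4", "d6"}

hits = retrieved & relevant                 # relevant documents actually retrieved
recall    = len(hits) / len(relevant)       # share of relevant docs that were found
precision = len(hits) / len(retrieved)      # share of retrieved docs that were relevant

print(recall, precision)  # 0.5 0.5
```

The study's point is that both numbers depend on the `relevant` set, so different judges could in principle shift them; averaged over queries, they turn out to be stable.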
  5. Soergel, D.: Mathematical analysis of documentation systems : an attempt to a theory of classification and search request formulation (1967) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 5449) [ClassicSimilarity], result of:
          0.007728611 = score(doc=5449,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 5449, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5449)
      0.16666667 = coord(1/6)
    
    Abstract
    As an attempt to make a general structural theory of information retrieval, a documentation system (DS) is defined as a formal system consisting of (a) a set o of objects (documents); (b) a set A++ of elementary attributes (key-words), from which further attributes may be constructed: A++ generates A; (c) a set of axioms of the form X++(x)=m (m ∈ M, M a set of constants) connecting attributes with objects; from the axioms further theorems (=true statements) may be constructed. By use of the theorems, different mappings O -> P(o) (P(o) set of all subsets of o) (search question -> set of documents retrieved) are defined. The type of a DS depends on two basic decisions: (1) choice of the rules for the construction of attributes and theorems, e.g., logical product in coordinate indexing; links. (2) choice of M; M may consist of the two constants 'applicable' and 'not applicable', or some positive integers, ...; Further practical decisions: A++ hierarchical or not; kind of mapping; introduction of roles (=further attributes). The most simple case - ordinary two-valued coordinate indexing - is discussed in detail; o is a free distributive (but not Boolean) lattice, the homomorphic image a ring of subsets of o; instead of negation, which is not useful, a useful retrieval operation 'praeternegation' is introduced. Furthermore these are discussed: a generalized definition of superimposed coding, some functions for the distance of objects or attributes, and optimization and automatic derivation of classifications. The model takes into account term-term relations and document-document relations. It may serve as a structural framework in terms of which the functional problems of retrieval theory may be expressed more clearly.
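The "logical product in coordinate indexing" mentioned in the abstract can be sketched as plain set intersection over postings: each elementary attribute (keyword) selects a subset of the object set, and coordinating attributes intersects those subsets. The keywords and document numbers below are invented for illustration:

```python
# Two-valued coordinate indexing: each keyword maps to the set of documents
# it is "applicable" to (an inverted file); invented postings for illustration.
postings = {
    "classification": {1, 2, 4},
    "retrieval":      {2, 3, 4},
    "lattice":        {2, 4, 5},
}

def product(*terms):
    """Logical product of attributes: documents carrying every listed term."""
    sets = [postings[t] for t in terms]
    out = sets[0].copy()
    for s in sets[1:]:
        out &= s  # set intersection = coordination
    return out

print(product("classification", "retrieval", "lattice"))  # {2, 4}
```

This is the "most simple case" the abstract analyzes; the lattice-theoretic treatment studies which families of such retrievable subsets can arise.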
  6. Garfield, E.: ¬An algorithm for translating chemical names to molecular formulas (1961) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 3465) [ClassicSimilarity], result of:
          0.007728611 = score(doc=3465,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 3465, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3465)
      0.16666667 = coord(1/6)
    
    Abstract
    This dissertation discusses, explains, and demonstrates a new algorithm for translating chemical nomenclature into molecular formulas. In order to place the study in its proper context and perspective, the historical development of nomenclature is first discussed, as well as other related aspects of the chemical information problem. The relationship of nomenclature to modern linguistic studies is then introduced. The relevance of structural linguistic procedures to the study of chemical nomenclature is shown. The methods of the linguist are illustrated by examples from chemical discourse. The algorithm is then explained, first for the human translator and then for use by a computer. Flow diagrams for the computer syntactic analysis, dictionary look-up routine, and formula calculation routine are included. The sampling procedure for testing the algorithm is explained and, finally, conclusions are drawn with respect to the general validity of the method and the direction that might be taken for future research. A summary of modern chemical nomenclature practice is appended, primarily for use by the reader who is not familiar with chemical nomenclature.
    Content
    Doctoral dissertation, University of Pennsylvania, 1961. Cf.: http://www.garfield.library.upenn.edu/essays/v7p441y1984.pdf. Also in: Essays of an information scientist. Vol. 7. Philadelphia, PA: ISI Press, 1985. pp.441-513.
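A drastically simplified, hypothetical sketch of the dictionary-look-up and formula-calculation idea behind such an algorithm: map name fragments to atom counts and sum them. The fragment table and multiplier handling below are invented for illustration; Garfield's actual algorithm performs a full syntactic analysis of the name first.

```python
from collections import Counter

# Invented fragment dictionary: name morphemes -> atom-count contributions.
FRAGMENTS = {
    "meth":   Counter({"C": 1, "H": 4}),   # methane skeleton, CH4
    "chloro": Counter({"Cl": 1, "H": -1}), # each chloro substitutes one H
}
MULTIPLIERS = {"di": 2, "tri": 3}

def formula(parts):
    """Formula calculation: sum atom counts over (multiplier, fragment) pairs."""
    total = Counter()
    for mult, frag in parts:
        for atom, n in FRAGMENTS[frag].items():
            total[atom] += mult * n
    return {atom: n for atom, n in total.items() if n > 0}

# "dichloromethane" ~ di + chloro + meth(ane) -> CH2Cl2
print(formula([(MULTIPLIERS["di"], "chloro"), (1, "meth")]))
# -> {'Cl': 2, 'H': 2, 'C': 1}
```

The hard part, which this sketch skips entirely, is segmenting a real chemical name into such fragments; that is where the linguistic analysis described in the abstract comes in.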
  7. Classification and information control : Papers representing the work of the Classification Research Group during 1960-1968 (1969) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 3402) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=3402,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 3402, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3402)
      0.16666667 = coord(1/6)
    
    Content
    Contains the contributions: FAIRTHORNE, R.A.: 'Browsing' schemes and 'specialist' schemes; KYLE, B.R.F.: Lessons learned from experience in drafting the Kyle classification; MILLS, J.: Inadequacies of existing general classification schemes; COATES, E.J.: CRG proposals for a new general classification; TOMLINSON, H.: Notes on initial work for NATO classification; TOMLINSON, H.: Report on work for new general classification scheme; TOMLINSON, H.: Expansion of categories using mining terms; TOMLINSON, H.: Relationship between geology and mining; TOMLINSON, H.: Use of categories for sculpture; TOMLINSON, H.: Expansion of categories using terms from physics; TOMLINSON, H.: The distinction between physical and chemical entities; TOMLINSON, H.: Concepts within politics; TOMLINSON, H.: Problems arising from first GCS papers; AUSTIN, D.: The theory of integrative levels reconsidered as the basis of a general classification; AUSTIN, D.: Demonstration: provisional scheme for naturally occurring entities; AUSTIN, D.: Stages in classing and exercises; AUSTIN, D.: Report to the Library Association Research Committee on the use of the NATO grant
  8. Quillian, M.R.: Word concepts : a theory and simulation of some basic semantic capabilities. (1967) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 4414) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=4414,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 4414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4414)
      0.16666667 = coord(1/6)
    
    Abstract
    In order to discover design principles for a large memory that can enable it to serve as the base of knowledge underlying human-like language behavior, experiments with a model memory are being performed. This model is built up within a computer by "recoding" a body of information from an ordinary dictionary into a complex network of elements and associations interconnecting them. Then, the ability of a program to use the resulting model memory effectively for simulating human performance provides a test of its design. One simulation program, now running, is given the model memory and is required to compare and contrast the meanings of arbitrary pairs of English words. For each pair, the program locates any relevant semantic information within the model memory, draws inferences on the basis of this, and thereby discovers various relationships between the meanings of the two words. Finally, it creates English text to express its conclusions. The design principles embodied in the memory model, together with some of the methods used by the program, constitute a theory of how human memory for semantic and other conceptual material may be formatted, organized, and used.
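The comparison task described above can be sketched as breadth-first "spreading" from two words through a word-concept network until their paths meet at a shared concept; the tiny network below is invented and far simpler than Quillian's dictionary-derived model memory:

```python
from collections import deque

# Invented word-concept network: word -> concepts it points to.
NET = {
    "plant":     ["living", "structure"],
    "animal":    ["living"],
    "living":    ["thing"],
    "structure": ["thing"],
    "thing":     [],
}

def meeting_point(a, b, net):
    """Spread activation from both words; return the first concept reached from each."""
    seen = {a: {a}, b: {b}}
    frontier = {a: deque([a]), b: deque([b])}
    while frontier[a] or frontier[b]:
        for start in (a, b):                      # alternate one step per word
            if frontier[start]:
                node = frontier[start].popleft()
                for nxt in net.get(node, []):
                    if nxt not in seen[start]:
                        seen[start].add(nxt)
                        frontier[start].append(nxt)
                        if nxt in seen[a] and nxt in seen[b]:
                            return nxt            # activations have met
    return None

print(meeting_point("plant", "animal", NET))  # living
```

In Quillian's program, the concept where the activations meet anchors the relationship the system then verbalizes ("a plant and an animal are both living things").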
  9. Ranganathan, S.R.: Subject headings and facet analysis (1964) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 1834) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1834,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1834, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1834)
      0.16666667 = coord(1/6)
    
    Abstract
    After establishing the terminology, shows how the choice of the name of the subject of a document, and the rendering of that name in the heading of the specific subject entry, can be obtained by facet analysis based on postulates and principles. After showing that subject headings constitute an artificial language, points out that using facet analysis for subject headings does not amount to using class numbers. Marks out the area for an objective statistical survey of sought headings for subject entry. Calls on the Council on Library Resources, Inc. to provide for this project.
  10. Laureilhe, M.T.: Essai de bibliographie des thesauri et index par matières parus depuis 1960 (1969) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 1966) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1966,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1966)
      0.16666667 = coord(1/6)
    
    Footnote
    Supplements in: 15(1970), no.1, pp.21-26; 16(1971), no.1, pp.33-38; 17(1972), no.2, pp.67-73; 18(1973), no.3, pp.101-107; 19(1974), no.5, pp.257-262; 20(1975), no.3, pp.119-127; 21(1976), no.3, pp.107-113
  11. Wilson, P.: Subjects and the sense of position (1968) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 1353) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1353,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1353, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1353)
      0.16666667 = coord(1/6)
    
    Footnote
    Reprinted in: Theory of subject analysis: a sourcebook. Eds.: L.M. Chan et al., pp.308-325.

Languages

  • d 137
  • e 113
  • f 1

Types

  • a 162
  • m 48
  • x 11
  • s 10
  • ? 9
  • r 6
  • b 3
  • el 1
  • h 1
  • n 1

Classifications