Search (207 results, page 11 of 11)

  • type_ss:"a"
  • year_i:[1980 TO 1990}
  1. Pettee, J.: The subject approach to books and the development of the dictionary catalog (1985) 0.00
    0.0025485782 = product of:
      0.010194313 = sum of:
        0.010194313 = product of:
          0.020388626 = sum of:
            0.020388626 = weight(_text_:22 in 3624) [ClassicSimilarity], result of:
              0.020388626 = score(doc=3624,freq=2.0), product of:
                0.13174312 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037621226 = queryNorm
                0.15476047 = fieldWeight in 3624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3624)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Original in: Pettee, J.: The history and theory of the alphabetical subject approach to books. New York: Wilson 1946, pp. 22-25.
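    
    The breakdown above is Lucene/Solr "explain" output for the ClassicSimilarity (TF-IDF) ranking model: the final value is the product of a query weight (idf x queryNorm), a field weight (tf x idf x fieldNorm), and coordination factors for the fraction of query clauses that matched. The same structure repeats for every entry on this page. The following sketch, a minimal Python illustration, reproduces the numbers of this first entry; queryNorm and fieldNorm are copied from the tree rather than recomputed, since they depend on the full query and on index-time field length, which this page does not show.
    
      import math
      
      # Lucene ClassicSimilarity building blocks (TF-IDF).
      def idf(doc_freq, max_docs):
          # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))   # ~3.5018296 for docFreq=3622, maxDocs=44218
      
      def tf(freq):
          # tf(t in d) = sqrt(freq)
          return math.sqrt(freq)                             # ~1.4142135 for freq=2.0
      
      doc_freq, max_docs = 3622, 44218
      query_norm = 0.037621226   # taken from the explain tree, not recomputed
      field_norm = 0.03125       # encoded field-length norm, taken from the explain tree
      
      query_weight = idf(doc_freq, max_docs) * query_norm            # ~0.13174312
      field_weight = tf(2.0) * idf(doc_freq, max_docs) * field_norm  # ~0.15476047
      raw_score = query_weight * field_weight                        # ~0.020388626
      
      # coord(1/2) and coord(1/4): only one of two, and one of four, query clauses matched.
      final_score = raw_score * 0.5 * 0.25
      print(final_score)   # ~0.0025485782, matching the listed score to float precision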
  2. Mooers, C.N.: The indexing language of an information retrieval system (1985) 0.00
    0.002230006 = product of:
      0.008920024 = sum of:
        0.008920024 = product of:
          0.017840048 = sum of:
            0.017840048 = weight(_text_:22 in 3644) [ClassicSimilarity], result of:
              0.017840048 = score(doc=3644,freq=2.0), product of:
                0.13174312 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037621226 = queryNorm
                0.1354154 = fieldWeight in 3644, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3644)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Original in: Information retrieval today: papers presented at an Institute conducted by the Library School and the Center for Continuation Study, University of Minnesota, Sept. 19-22, 1962. Ed. by Wesley Simonton. Minneapolis, Minn.: The Center, 1963, pp. 21-36.
  3. Feibleman, J.K.: Theory of integrative levels (1985) 0.00
    0.0017656278 = product of:
      0.007062511 = sum of:
        0.007062511 = product of:
          0.021187533 = sum of:
            0.021187533 = weight(_text_:k in 3637) [ClassicSimilarity], result of:
              0.021187533 = score(doc=3637,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.15776339 = fieldWeight in 3637, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3637)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In the early 1960s, the Classification Research Group in London (q.v.) had reached the point in its experimentation with faceted classification systems where some kind of amalgamation of individual schemes was needed. They sought a unifying principle or set of principles that would provide a basis for a general system. The individual faceted schemes would not merge; what was central to one subject was fringe to another, but the fringes did not coalesce. In looking farther afield, they discovered the theory of "integrative levels" set forth by James K. Feibleman, Chairman and Professor of Philosophy at Tulane University until 1969 and author of forty-five books and more than 175 articles in various fields of philosophy. Feibleman's research concerned the development of the sciences considered in terms of an organizing principle. In the physical sciences, one could begin with subparticles and work up to atoms, molecules, and molecular assemblages, interpolating the biological equivalents. Feibleman separates the various levels by use of a "no return" device: "each level organizes the level or levels below it plus one emergent quality." The process is not reversible without loss of identity. A dog, in his system, is no longer a dog when it has been run over by a car; the smashed parts cannot be put together again to function as a dog. The theory of integrative levels is an interesting one. The levels from subparticles to clusters of galaxies, or from nuclei to organisms, are relatively clearly defined. A discipline, such as any of the ones comprising the "hard sciences," is made up of integrative levels. Research is cumulative, so that scholars are ready to contribute when very young. Classification in these fields can make good use of the theory of integrative levels; in fact, it should do so. It would appear that the method is more difficult to apply in the social sciences and humanities. This appearance may, however, be superficial. Almost all past happenings are irrevocable; one cannot recall the French Revolution and re-fight it. Any academic discipline that moves on over time does not usually return to an earlier position, even when there are schools of thought involved. Philosophy may have "neo-" this or that, but the subsequent new is not the same as the previous new. One has only to look at the various kinds of neo-Platonists that arise from time to time to realize that. Physical science recognizes a series of paradigms in changing its methodology over time, and a similar situation may also turn out to be true in cognitive science. If this should turn out to be the case, integrative levels would probably have a part in that field as well.
  4. Kaiser, J.O.: Systematic indexing (1985) 0.00
    0.0017656278 = product of:
      0.007062511 = sum of:
        0.007062511 = product of:
          0.021187533 = sum of:
            0.021187533 = weight(_text_:k in 571) [ClassicSimilarity], result of:
              0.021187533 = score(doc=571,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.15776339 = fieldWeight in 571, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=571)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    A native of Germany and a former teacher of languages and music, Julius Otto Kaiser (1868-1927) came to the Philadelphia Commercial Museum to be its librarian in 1896. Faced with the problem of making "information" accessible, he developed a method of indexing he called systematic indexing. The first draft of his scheme, published in 1896-97, was an important landmark in the history of subject analysis. R. K. Olding credits Kaiser with making the greatest single advance in indexing theory since Charles A. Cutter, and John Metcalfe eulogizes him by observing that "in sheer capacity for really scientific and logical thinking, Kaiser's was probably the best mind that has ever applied itself to subject indexing." Kaiser was an admirer of "system." By systematic indexing he meant indicating information not with natural language expressions as, for instance, Cutter had advocated, but with artificial expressions constructed according to formulas. Kaiser grudged natural language its approximateness, its vagaries, and its ambiguities. The formulas he introduced were to provide a "machinery for regularising or standardising language" (paragraph 67). Kaiser recognized three categories or "facets" of index terms: (1) terms of concretes, representing things, real or imaginary (e.g., money, machines); (2) terms of processes, representing either conditions attaching to things or their actions (e.g., trade, manufacture); and (3) terms of localities, representing, for the most part, countries (e.g., France, South Africa). Expressions in Kaiser's index language were called statements. Statements consisted of sequences of terms, the syntax of which was prescribed by formula. These formulas specified sequences of terms by reference to category types. Only three citation orders were permitted: (1) a term in the concrete category followed by one in the process category (e.g., Wool-Scouring); (2) a country term followed by a process term (e.g., Brazil-Education); and (3) a concrete term followed by a country term, followed by a process term (e.g., Nitrate-Chile-Trade). Kaiser's system was a precursor of two of the most significant developments in twentieth-century approaches to subject access: the special-purpose use of language for indexing, and thus the concept of an index language, which was to emerge as a generative idea at the time of the second Cranfield experiment (1966), and the use of facets to categorize subject indicators, which was to become the characterizing feature of analytico-synthetic indexing methods such as the Colon Classification. In addition to its visionary quality, Kaiser's work is notable for its meticulousness and honesty, as can be seen, for instance, in his observations about the difficulties in facet definition.
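    
    As a concrete illustration of the citation-order formulas described above, the following Python sketch checks a candidate statement against the three permitted sequences. The category labels, example vocabulary, and the function name is_valid_statement are illustrative assumptions, not Kaiser's own notation.
    
      # Hypothetical sketch of Kaiser's three citation orders.
      CATEGORY = {
          "Wool": "concrete", "Nitrate": "concrete", "Money": "concrete",
          "Scouring": "process", "Trade": "process", "Education": "process",
          "Brazil": "country", "Chile": "country", "France": "country",
      }
      
      ALLOWED_ORDERS = [
          ("concrete", "process"),             # e.g. Wool-Scouring
          ("country", "process"),              # e.g. Brazil-Education
          ("concrete", "country", "process"),  # e.g. Nitrate-Chile-Trade
      ]
      
      def is_valid_statement(terms):
          """Return True if the term sequence follows one of the prescribed formulas."""
          return tuple(CATEGORY.get(t, "?") for t in terms) in ALLOWED_ORDERS
      
      print(is_valid_statement(["Nitrate", "Chile", "Trade"]))   # True
      print(is_valid_statement(["Trade", "Chile"]))              # False: wrong category order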
  5. Foskett, D.J.: Classification and integrative levels (1985) 0.00
    0.0015449243 = product of:
      0.0061796973 = sum of:
        0.0061796973 = product of:
          0.018539092 = sum of:
            0.018539092 = weight(_text_:k in 3639) [ClassicSimilarity], result of:
              0.018539092 = score(doc=3639,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.13804297 = fieldWeight in 3639, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3639)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Very interesting experimental work was done by Douglas Foskett and other British classificationists during the fifteen-year period following the end of World War II. The research was effective in demonstrating that it was possible to make very sophisticated classification systems for virtually any subject: systems suitable for experts and for the general user needing a detailed subject classification. The success of these special systems led to consideration of the possibility of putting them together to form a new general classification system. To do such a thing would require a general, overall framework of some kind, since systems limited to a special subject are easier to construct because one does not have to worry about including all of the pertinent facets needed for a general system. Individual subject classifications do not automatically coalesce into a general pattern. For example, what is central to one special classification might be fringe in another or in several others. Fringe terminologies may not coincide in terms of logical relationships. Homographs and homonyms may not rear their ugly heads until attempts at merger are made. Foskett points out that even identifying a thing in terms of a noun or verb involves different assumptions in approach. For these and other reasons, it made sense to look for existing work in fields where the necessary framework already existed. Foskett found the rudiments of such a system in a number of writings, culminating in a logical system called "integrative levels" suggested by James K. Feibleman (q.v.). This system consists of a set of advancing conceptual levels relating to the apparent organization of nature. These levels are irreversible in that once one has reached a certain level there is no going back. Foskett points out that with higher levels and greater complexity in structure, the analysis needed to establish valid levels becomes much more difficult, especially as Feibleman stipulates that a higher level must not be reducible to a lower one. (That is, one cannot put Humpty Dumpty together again.) Foskett is optimistic to the extent of suggesting that references from level to level be made upwards, with inductive reasoning, a system used by Derek Austin (q.v.) for making reference structures in PRECIS. Though the method of integrative levels so far has not been used successfully with the byproducts of human social behavior and thought, so much has been learned about these areas during the past twenty years that Foskett may yet be correct in his optimism. Foskett's name has long been associated with classification in the social sciences. As with many of the British classificationists included in this book, he has been a member of the Classification Research Group for about forty years. Like the others, he continues to contribute to the field.
  6. Needham, R.M.; Sparck Jones, K.: Keywords and clumps (1985) 0.00
    0.0015449243 = product of:
      0.0061796973 = sum of:
        0.0061796973 = product of:
          0.018539092 = sum of:
            0.018539092 = weight(_text_:k in 3645) [ClassicSimilarity], result of:
              0.018539092 = score(doc=3645,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.13804297 = fieldWeight in 3645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3645)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
  7. Borko, H.: Research in computer based classification systems (1985) 0.00
    0.0015449243 = product of:
      0.0061796973 = sum of:
        0.0061796973 = product of:
          0.018539092 = sum of:
            0.018539092 = weight(_text_:k in 3647) [ClassicSimilarity], result of:
              0.018539092 = score(doc=3647,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.13804297 = fieldWeight in 3647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3647)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The selection in this reader by R. M. Needham and K. Sparck Jones reports an early approach to automatic classification that was taken in England. The following selection reviews various approaches that were being pursued in the United States at about the same time. It then discusses a particular approach initiated in the early 1960s by Harold Borko, at that time Head of the Language Processing and Retrieval Research Staff at the System Development Corporation, Santa Monica, California, and, since 1966, a member of the faculty at the Graduate School of Library and Information Science, University of California, Los Angeles. As was described earlier, there are two steps in automatic classification, the first being to identify pairs of terms that are similar by virtue of co-occurring as index terms in the same documents, and the second being to form equivalence classes of intersubstitutable terms. To compute similarities, Borko and his associates used a standard correlation formula; to derive classification categories, where Needham and Sparck Jones used clumping, the Borko team used the statistical technique of factor analysis. The fact that documents can be classified automatically, and in any number of ways, is worthy of passing notice. Worthy of serious attention would be a demonstration that a computer-based classification system was effective in the organization and retrieval of documents. One reason for the inclusion of the following selection in the reader is that it addresses the question of evaluation. To evaluate the effectiveness of their automatically derived classification, Borko and his team asked three questions. The first was: Is the classification reliable? In other words, could the categories derived from one sample of texts be used to classify other texts? Reliability was assessed by a case-study comparison of the classes derived from three different samples of abstracts. The not-so-surprising conclusion reached was that automatically derived classes were reliable only to the extent that the sample from which they were derived was representative of the total document collection. The second evaluation question asked whether the classification was reasonable, in the sense of adequately describing the content of the document collection. The answer was sought by comparing the automatically derived categories with categories in a related classification system that was manually constructed. Here the conclusion was that the automatic method yielded categories that fairly accurately reflected the major areas of interest in the sample collection of texts; however, since there were only eleven such categories and they were quite broad, they could not be regarded as suitable for use in a university or any large general library. The third evaluation question asked whether automatic classification was accurate, in the sense of producing results similar to those obtainable by human classifiers. When using human classification as a criterion, automatic classification was found to be 50 percent accurate.
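    
    The two-step procedure described above (pairwise term similarity from co-occurrence, followed by category derivation) can be sketched as follows. This is a minimal illustration under assumptions: the toy term-document matrix is random, the variable names are invented, and an eigendecomposition of the correlation matrix stands in for the factor analysis the Borko team actually used.
    
      import numpy as np
      
      # Toy data: 8 index terms x 40 documents, binary occurrence (not Borko's corpus).
      rng = np.random.default_rng(0)
      term_doc = (rng.random((8, 40)) > 0.6).astype(float)
      
      # Step 1: similarity of term pairs via a standard correlation over co-occurrence profiles.
      corr = np.corrcoef(term_doc)                 # 8 x 8 term-term correlation matrix
      
      # Step 2: factor-analysis-style extraction; each strong eigenvector is read as a
      # candidate classification category, with the highest-loading terms as its members.
      eigvals, eigvecs = np.linalg.eigh(corr)
      order = np.argsort(eigvals)[::-1]
      loadings = eigvecs[:, order[:3]]             # keep the three strongest factors
      
      for f in range(loadings.shape[1]):
          members = np.argsort(np.abs(loadings[:, f]))[::-1][:3]
          print(f"factor {f}: top terms {members.tolist()}")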

Languages

  • e 144
  • d 58
  • ? 1
  • dk 1
  • f 1