Search (34 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Klassifikationstheorie: Elemente / Struktur"
  • × year_i:[1990 TO 2000}
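  The three active filters are field restrictions in Lucene/Solr query syntax: two string-valued fields (language, theme) and an integer year range, where the mixed bracket in year_i:[1990 TO 2000} means the lower bound 1990 is included and the upper bound 2000 is excluded, i.e. publication years 1990-1999. A minimal sketch of how such filters might be passed to a Solr-style backend (the q/fq parameter names and the *:* base query are assumptions, not taken from this listing):

      from urllib.parse import urlencode

      # Hypothetical Solr-style request parameters; only the three filter
      # expressions themselves are copied from the listing above.
      params = [
          ("q", "*:*"),
          ("fq", 'language_ss:"e"'),
          ("fq", 'theme_ss:"Klassifikationstheorie: Elemente / Struktur"'),
          ("fq", "year_i:[1990 TO 2000}"),   # 1990 inclusive up to, but excluding, 2000
      ]
      print(urlencode(params))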
  1. Connaway, L.S.; Sievert, M.C.: Comparison of three classification systems for information on health insurance (1996) 0.03
    0.028138978 = product of:
      0.08441693 = sum of:
        0.016935252 = weight(_text_:of in 7242) [ClassicSimilarity], result of:
          0.016935252 = score(doc=7242,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27643585 = fieldWeight in 7242, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.046250064 = weight(_text_:systems in 7242) [ClassicSimilarity], result of:
          0.046250064 = score(doc=7242,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.38414678 = fieldWeight in 7242, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=7242)
        0.021231614 = product of:
          0.042463228 = sum of:
            0.042463228 = weight(_text_:22 in 7242) [ClassicSimilarity], result of:
              0.042463228 = score(doc=7242,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.30952093 = fieldWeight in 7242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7242)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Reports results of a comparative study of 3 classification schemes, LCC, DDC and NLM Classification, to determine their effectiveness in classifying materials on health insurance. Examined 2 hypotheses: that there would be no differences in the scatter of the 3 classification schemes; and that there would be overlap between all 3 schemes but no difference in the classes into which the subject was placed. There was subject scatter in all 3 classification schemes and little overlap between the 3 systems
    Date
    22. 4.1997 21:10:19
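    The indented breakdown above is Lucene's ClassicSimilarity "explain" output: each matching query term contributes fieldWeight (tf x idf x fieldNorm) times queryWeight (idf x queryNorm), the term contributions are summed, and the sum is scaled by coord, the fraction of query clauses that matched (here 3 of 9, with the "22" clause carrying a nested coord(1/2)). A minimal sketch that recomputes this record's score from the figures shown, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

        import math

        MAX_DOCS = 44218
        QUERY_NORM = 0.03917671                        # copied from the listing, not recomputed

        def idf(doc_freq):
            # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

        def term_score(freq, doc_freq, field_norm):
            tf = math.sqrt(freq)                       # ClassicSimilarity tf: sqrt(term frequency)
            field_weight = tf * idf(doc_freq) * field_norm
            query_weight = idf(doc_freq) * QUERY_NORM
            return field_weight * query_weight

        # Record 1 (doc 7242), fieldNorm = 0.0625 for all three matching terms
        s_of      = term_score(8.0, 25162, 0.0625)         # "of"      -> ~0.0169
        s_systems = term_score(4.0, 5561, 0.0625)          # "systems" -> ~0.0463
        s_22      = term_score(2.0, 3622, 0.0625) * 0.5    # "22", nested coord(1/2)

        score = (s_of + s_systems + s_22) * 3 / 9          # outer coord(3/9)
        print(round(score, 4))                             # ~0.0281, displayed as 0.03

    The tiny deviations from the printed 0.028138978 are rounding effects in the copied constants; the score trees of the other records decompose in the same way.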
  2. Classification research for knowledge representation and organization : Proc. of the 5th Int. Study Conf. on Classification Research, Toronto, Canada, 24.-28.6.1991 (1992) 0.02
    0.024568737 = product of:
      0.07370621 = sum of:
        0.020822227 = weight(_text_:of in 2072) [ClassicSimilarity], result of:
          0.020822227 = score(doc=2072,freq=86.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33988333 = fieldWeight in 2072, product of:
              9.273619 = tf(freq=86.0), with freq of:
                86.0 = termFreq=86.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
        0.03244723 = weight(_text_:systems in 2072) [ClassicSimilarity], result of:
          0.03244723 = score(doc=2072,freq=14.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2695023 = fieldWeight in 2072, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
        0.020436753 = weight(_text_:software in 2072) [ClassicSimilarity], result of:
          0.020436753 = score(doc=2072,freq=2.0), product of:
            0.15541996 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03917671 = queryNorm
            0.13149375 = fieldWeight in 2072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2072)
      0.33333334 = coord(3/9)
    
    Abstract
    This volume deals with both theoretical and empirical research in classification and encompasses universal classification systems, special classification systems, thesauri and the place of classification in a broad spectrum of document and information systems. Papers fall into one of three major areas: 1) general principles and policies; 2) structure and logic in classification, and empirical investigation; 3) classification in the design of various types of document/information systems. The papers originate from the ISCCR '91 conference and have been selected according to the following criteria: relevance to the conference theme; importance of the topic in the representation and organization of knowledge; quality; and originality in terms of potential contribution to research and new knowledge.
    Content
    Contains the contributions: SVENONIUS, E.: Classification: prospects, problems, and possibilities; BEALL, J.: Editing the Dewey Decimal Classification online: the evolution of the DDC database; BEGHTOL, C.: Toward a theory of fiction analysis for information storage and retrieval; CRAVEN, T.C.: Concept relation structures and their graphic display; FUGMANN, R.: Illusory goals in information science research; GILCHRIST, A.: UDC: the 1990's and beyond; GREEN, R.: The expression of syntagmatic relationships in indexing: are frame-based index languages the answer?; HUMPHREY, S.M.: Use and management of classification systems for knowledge-based indexing; MIKSA, F.L.: The concept of the universe of knowledge and the purpose of LIS classification; SCOTT, M. and A.F. FONSECA: Methodology for functional appraisal of records and creation of a functional thesaurus; ALBRECHTSEN, H.: PRESS: a thesaurus-based information system for software reuse; AMAESHI, B.: A preliminary AAT compatible African art thesaurus; CHATTERJEE, A.: Structures of Indian classification systems of the pre-Ranganathan era and their impact on the Colon Classification; COCHRANE, P.A.: Indexing and searching thesauri, the Janus or Proteus of information retrieval; CRAVEN, T.C.: A general versus a special algorithm in the graphic display of thesauri; DAHLBERG, I.: The basis of a new universal classification system seen from a philosophy of science point of view; DRABENSTOTT, K.M., RIESTER, L.C. and B.A. DEDE: Shelflisting using expert systems; FIDEL, R.: Thesaurus requirements for an intermediary expert system; GREEN, R.: Insights into classification from the cognitive sciences: ramifications for index languages; GROLIER, E. de: Towards a syndetic information retrieval system; GUENTHER, R.: The USMARC format for classification data: development and implementation; HOWARTH, L.C.: Factors influencing policies for the adoption and integration of revisions to classification schedules; HUDON, M.: Term definitions in subject thesauri: the Canadian literacy thesaurus experience; HUSAIN, S.: Notational techniques for the accommodation of subjects in Colon Classification 7th edition: theoretical possibility vis-à-vis practical need; KWASNIK, B.H. and C. JORGERSEN: The exploration by means of repertory grids of semantic differences among names of official documents; MICCO, M.: Suggestions for automating the Library of Congress Classification schedules; PERREAULT, J.M.: An essay on the prehistory of general categories (II): G.W. Leibniz, Conrad Gesner; REES-POTTER, L.K.: How well do thesauri serve the social sciences?; REVIE, C.W. and G. SMART: The construction and the use of faceted classification schema in technical domains; ROCKMORE, M.: Structuring a flexible faceted thesaurus record for corporate information retrieval; ROULIN, C.: Sub-thesauri as part of a metathesaurus; SMITH, L.C.: UNISIST revisited: compatibility in the context of collaboratories; STILES, W.G.: Notes concerning the use of chain indexing as a possible means of simulating the inductive leap within artificial intelligence; SVENONIUS, E., LIU, S. and B. SUBRAHMANYAM: Automation in chain indexing; TURNER, J.: Structure in data in the Stockshot database at the National Film Board of Canada; VIZINE-GOETZ, D.: The Dewey Decimal Classification as an online classification tool; WILLIAMSON, N.J.: Restructuring UDC: problems and possibilities; WILSON, A.: The hierarchy of belief: ideological tendentiousness in universal classification; WILSON, B.F.: An evaluation of the systematic botany schedule of the Universal Decimal Classification (English full edition, 1979); ZENG, L.: Research and development of classification and thesauri in China; CONFERENCE SUMMARY AND CONCLUSIONS
    Footnote
    Reviews in: International classification 19(1992) no.4, S.228-229 (B.C. Vickery); Journal of classification 11(1994) no.2, S.255-256 (W. Gödert)
    LCSH
    Knowledge, Theory of / Congresses
    Subject
    Knowledge, Theory of / Congresses
  3. Koshman, S.: Categorization and classification revisited : a review of concepts in library science and cognitive psychology (1993) 0.02
    0.022717217 = product of:
      0.10222748 = sum of:
        0.08389453 = weight(_text_:applications in 8349) [ClassicSimilarity], result of:
          0.08389453 = score(doc=8349,freq=2.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.4864132 = fieldWeight in 8349, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.078125 = fieldNorm(doc=8349)
        0.018332949 = weight(_text_:of in 8349) [ClassicSimilarity], result of:
          0.018332949 = score(doc=8349,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 8349, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=8349)
      0.22222222 = coord(2/9)
    
    Abstract
    Reviews the basic concepts associated with categorization and classification in order to examine the cognitive psychology and library science perspectives toward these processes, to discover if a theoretical affinity exists and to discuss potential applications of cognitive categorization theory to the field of library science
  4. Winske, E.: ¬The development and structure of an urban, regional, and local documents classification scheme (1996) 0.02
    0.02226542 = product of:
      0.06679626 = sum of:
        0.01960283 = weight(_text_:of in 7241) [ClassicSimilarity], result of:
          0.01960283 = score(doc=7241,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.31997898 = fieldWeight in 7241, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7241)
        0.028615767 = weight(_text_:systems in 7241) [ClassicSimilarity], result of:
          0.028615767 = score(doc=7241,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 7241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7241)
        0.018577661 = product of:
          0.037155323 = sum of:
            0.037155323 = weight(_text_:22 in 7241) [ClassicSimilarity], result of:
              0.037155323 = score(doc=7241,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.2708308 = fieldWeight in 7241, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7241)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    Discusses the reasons for the decision, taken at Florida International University Library, to develop an in-house classification system for its local documents collections. Reviews the structures of existing classification systems, noting their strengths and weaknesses in relation to the development of an in-house system, and describes the 5 components of the new system: geography, subject categories, extensions for population group and/or function, extensions for type of publication, and title/series designator
    Footnote
    Paper presented at conference on 'Local documents, a new classification scheme' at the Research Caucus of the Florida Library Association Annual Conference, Fort Lauderdale, Florida 22 Apr 95
    Source
    Journal of educational media and library sciences. 34(1996) no.1, S.19-34
  5. Curras, E.: Ranganathan's classification theories under the systems science postulates (1992) 0.02
    0.018828548 = product of:
      0.084728464 = sum of:
        0.02808394 = weight(_text_:of in 6993) [ClassicSimilarity], result of:
          0.02808394 = score(doc=6993,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.458417 = fieldWeight in 6993, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6993)
        0.05664453 = weight(_text_:systems in 6993) [ClassicSimilarity], result of:
          0.05664453 = score(doc=6993,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.4704818 = fieldWeight in 6993, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=6993)
      0.22222222 = coord(2/9)
    
    Abstract
    Describes the basic ideas concerning systems science and discusses S.R. Ranganathan's ideas about the concepts of 'universe of ideas', 'universe of science', 'universe of knowledge' and 'universe of classification'. Examines the principles, canons and postulates underlying Colon Classification. Discusses the structure of Colon Classification. Points out that the ideas of Ranganathan conform to the concept 'unity of science' and concludes that the principles of systems science or systems thinking are helpful in understanding the theory of classification formulated by Ranganathan
    Source
    Journal of library and information science. 17(1992) no.1, S.45-65
  6. Molholt, P.: Qualities of classification schemes for the Information Superhighway (1995) 0.02
    0.01681507 = product of:
      0.050445206 = sum of:
        0.016735615 = weight(_text_:of in 5562) [ClassicSimilarity], result of:
          0.016735615 = score(doc=5562,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.27317715 = fieldWeight in 5562, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.020439833 = weight(_text_:systems in 5562) [ClassicSimilarity], result of:
          0.020439833 = score(doc=5562,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 5562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5562)
        0.013269759 = product of:
          0.026539518 = sum of:
            0.026539518 = weight(_text_:22 in 5562) [ClassicSimilarity], result of:
              0.026539518 = score(doc=5562,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.19345059 = fieldWeight in 5562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5562)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    For my segment of this program I'd like to focus on some basic qualities of classification schemes. These qualities are critical to our ability to truly organize knowledge for access. As I see it, there are at least five qualities of note. The first one of these properties that I want to talk about is "authoritative." By this I mean standardized, but more than standardized: standardized with a built-in consensus-building process. A classification scheme constructed by a collaborative, consensus-building process carries the approval, and the authority, of the discipline groups that contribute to it and that it affects... The next property of classification systems is "expandable," living, responsive, with a clear locus of responsibility for its continuous upkeep. The worst thing you can do with a thesaurus, or a classification scheme, is to finish it. You can't ever finish it because it reflects ongoing intellectual activity... The third property is "intuitive." That is, the system has to be approachable, it has to be transparent, or at least capable of being transparent. It has to have an underlying logic that supports the classification scheme but doesn't dominate it... The fourth property is "organized and logical." I advocate very strongly, and agree with Lois Chan, that classification must be based on a rule-based structure, on somebody's world-view of the syndetic structure... The fifth property is "universal," by which I mean the classification scheme needs to be usable by any specific system or application, and be available as a language for multiple purposes.
    Source
    Cataloging and classification quarterly. 21(1995) no.2, S.19-22
  7. Kochar, R.S.: Library classification systems (1998) 0.02
    0.016011083 = product of:
      0.07204988 = sum of:
        0.014818345 = weight(_text_:of in 931) [ClassicSimilarity], result of:
          0.014818345 = score(doc=931,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 931, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=931)
        0.057231534 = weight(_text_:systems in 931) [ClassicSimilarity], result of:
          0.057231534 = score(doc=931,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.47535738 = fieldWeight in 931, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=931)
      0.22222222 = coord(2/9)
    
    Abstract
    Library classification traces the origins of the subject and leads on to the latest developments in it. This user-friendly text explains concepts through analogies, diagrams, and tables. The fundamental but important topics and terminology of classification have been uniquely explained. The book deals with the recent trends in the use of computers in cataloguing, including on-line systems, artificial intelligence systems etc. With its up-to-date and comprehensive coverage the book will serve as a text for degree students of Library and Information Science and also prove to be invaluable reference material to professionals and researchers.
    Content
    Contents: Preface. 1. Classification systems. 2. Automatic classification. 3. Knowledge classification. 4. Reflections on library classification. 5. General classification schemes. 6. Hierarchical classification. 7. Faceted classification. 8. Present methods and future directions. Index.
  8. Zackland, M.; Fontaine, D.: Systematic building of conceptual classification systems with C-KAT (1996) 0.02
    0.015953662 = product of:
      0.07179148 = sum of:
        0.022227516 = weight(_text_:of in 5145) [ClassicSimilarity], result of:
          0.022227516 = score(doc=5145,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36282203 = fieldWeight in 5145, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5145)
        0.049563963 = weight(_text_:systems in 5145) [ClassicSimilarity], result of:
          0.049563963 = score(doc=5145,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.41167158 = fieldWeight in 5145, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5145)
      0.22222222 = coord(2/9)
    
    Abstract
    C-KAT is a method and a tool which supports the design of feature-oriented classification systems for knowledge-based systems. It uses a specialized Heuristic Classification conceptual model named 'classification by structural shift', which sees the classification process as the matching of different classifications of the same set of objects or situations organized around different structural principles. To manage the complexity induced by the cross-product, C-KAT supports the use of a least-commitment strategy which applies in a context of constraint-directed reasoning. Presents this method using an example from the field of industrial fire insurance
    Source
    International journal of human-computer studies. 44(1996) no.5, S.603-627
  9. Dahlberg, I.: Classification structure principles : Investigations, experiences, conclusions (1998) 0.02
    0.01508584 = product of:
      0.06788628 = sum of:
        0.025402877 = weight(_text_:of in 47) [ClassicSimilarity], result of:
          0.025402877 = score(doc=47,freq=32.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.41465378 = fieldWeight in 47, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=47)
        0.042483397 = weight(_text_:systems in 47) [ClassicSimilarity], result of:
          0.042483397 = score(doc=47,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.35286134 = fieldWeight in 47, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=47)
      0.22222222 = coord(2/9)
    
    Abstract
    For the purpose of establishing compatibility between the major universal classification systems in use, their structure principles were investigated and crucial points of difficulty for this undertaking were looked for, in order to relate the guiding classes, e.g. of the DDC, UDC, LCC, BC, and CC, to the subject groups of the ICC. With the help of a matrix into whose fields all subject groups of the ICC were inserted, it was not difficult at all to enter the notations of the universal classification systems mentioned. However, differences in terms of level of subdivision were found, as well as differences of occurrences. Most, though not all, of the fields of the ICC matrix could be completely filled with the corresponding notations of the other systems. Through this matrix, a first table of some 81 equivalences was established on which further work regarding the next levels of subject fields can be based
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  10. Beghtol, C.: General classification systems : structural principles for multidisciplinary specification (1998) 0.01
    0.014892921 = product of:
      0.067018144 = sum of:
        0.017962547 = weight(_text_:of in 44) [ClassicSimilarity], result of:
          0.017962547 = score(doc=44,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 44, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=44)
        0.0490556 = weight(_text_:systems in 44) [ClassicSimilarity], result of:
          0.0490556 = score(doc=44,freq=8.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.4074492 = fieldWeight in 44, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=44)
      0.22222222 = coord(2/9)
    
    Abstract
    In this century, knowledge creation, production, dissemination and use have changed profoundly. Intellectual and physical barriers have been substantially reduced by the rise of multidisciplinarity and by the influence of computerization, particularly by the spread of the World Wide Web (WWW). Bibliographic classification systems need to respond to this situation. Three possible strategic responses are described: 1) adopting an existing system; 2) adapting an existing system; and 3) finding new structural principles for classification systems. Examples of these three responses are given. An extended example of the third option uses the knowledge outline in the Spectrum of Britannica Online to suggest a theory of "viewpoint warrant" that could be used to incorporate differing perspectives into general classification systems
    Source
    Structures and relations in knowledge organization: Proceedings of the 5th International ISKO-Conference, Lille, 25.-29.8.1998. Ed.: W. Mustafa el Hadi et al
  11. Dahlberg, I.: DIN 32705: the German standard on classification systems : a critical appraisal (1992) 0.01
    0.013674567 = product of:
      0.061535552 = sum of:
        0.019052157 = weight(_text_:of in 2669) [ClassicSimilarity], result of:
          0.019052157 = score(doc=2669,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 2669, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2669)
        0.042483397 = weight(_text_:systems in 2669) [ClassicSimilarity], result of:
          0.042483397 = score(doc=2669,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.35286134 = fieldWeight in 2669, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2669)
      0.22222222 = coord(2/9)
    
    Abstract
    The German standard on the construction and further development of classification systems is introduced with its background. The contents of its 8 chapters are described. A critical appraisal considers (1) the fact that the standard does not openly deal with the optimal form of CS, viz. faceted CS, but treats them as one possibility among others, although the authors seem to have had this kind in mind when recommending the section on steps of CS development and other sections of the standard; (2) that the standard does not give any recommendation on the computerization of the necessary activities in establishing CS; and (3) that a convergence of CS and thesauri in the form of faceted CS and faceted thesauri has not been taken into consideration. In conclusion, some doubts are raised whether a standard would be the best medium to provide recommendations or guidelines for the construction of such systems. More adequate ways for this should be explored
  12. Hurt, C.D.: Classification and subject analysis : looking to the future at a distance (1997) 0.01
    0.011876687 = product of:
      0.053445093 = sum of:
        0.020741362 = weight(_text_:of in 6929) [ClassicSimilarity], result of:
          0.020741362 = score(doc=6929,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33856338 = fieldWeight in 6929, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6929)
        0.03270373 = weight(_text_:systems in 6929) [ClassicSimilarity], result of:
          0.03270373 = score(doc=6929,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 6929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=6929)
      0.22222222 = coord(2/9)
    
    Abstract
    Classic classification schemes are uni-dimensional, with few exceptions. One of the challenges of distance education and new learning strategies is that the proliferation of course work defies the traditional categorization. The rigidity of most present classification schemes does not mesh well with the burgeoning fluidity of the academic environment. One solution is a return to a largely forgotten area of study - classification theory. Some suggestions for exploration are nonmonotonic logic systems, neural network models, and non-library models.
  13. Garcia Marco, F.J.; Esteban Navarro, M.A.: On some contributions of the cognitive sciences and epistemology to a theory of classification (1993) 0.01
    0.010131279 = product of:
      0.045590755 = sum of:
        0.021062955 = weight(_text_:of in 5876) [ClassicSimilarity], result of:
          0.021062955 = score(doc=5876,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34381276 = fieldWeight in 5876, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5876)
        0.0245278 = weight(_text_:systems in 5876) [ClassicSimilarity], result of:
          0.0245278 = score(doc=5876,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 5876, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5876)
      0.22222222 = coord(2/9)
    
    Abstract
    Intended, first of all, is a preliminary review of the implications that the new approaches to the theory of classification, mainly from cognitive psychology and epistemology, may have for information work and research. As a secondary topic the scientific relations existing among information science, epistemology and the cognitive sciences are discussed. Classification is seen as a central activity in all daily and scientific activities, and, of course, of knowledge organization in information services. There is a mutual implication between classification and conceptualization, as the former moves in a natural way to the latter and the best result elaborated for classification is the concept. Research in concept theory is needed for a theory of classification. In this direction it is of outstanding importance to integrate the achievements of 'natural concept formation theory' (NCFT) as an alternative approach to conceptualization different from the traditional one of logicians and problem solving researchers. In conclusion both approaches are seen as being complementary: the NCFT approach being closer to the user and the logical one being more suitable for experts, including 'expert systems'
  14. Spiteri, L.: ¬A simplified model for facet analysis : Ranganathan 101 (1998) 0.01
    0.008273165 = product of:
      0.03722924 = sum of:
        0.012701439 = weight(_text_:of in 3842) [ClassicSimilarity], result of:
          0.012701439 = score(doc=3842,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 3842, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3842)
        0.0245278 = weight(_text_:systems in 3842) [ClassicSimilarity], result of:
          0.0245278 = score(doc=3842,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 3842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=3842)
      0.22222222 = coord(2/9)
    
    Abstract
    Ranganathan's canons, principles, and postulates can easily confuse readers, especially because he revised and added to them in various editions of his many books. The Classification Research Group, who drew on Ranganathan's work as their basis for classification theory but developed it in their own way, has never clearly organized all their equivalent canons and principles. In this article Spiteri gathers the fundamental rules from both systems and compares and contrasts them. She makes her own clearer set of principles for constructing facets, stating the subject of a document, and designing notation. Spiteri's "simplified model" is clear and understandable, but certainly not simplistic. The model does not include methods for making a faceted system, but will serve as a very useful guide in how to turn initial work into a rigorous classification. Highly recommended
    Source
    Canadian journal of information and library science. 23(1998) nos.1/2, S.1-30
  15. Bowker, G.C.; Star, S.L.: Sorting things out : classification and its consequences (1999) 0.01
    0.008259334 = product of:
      0.037167 = sum of:
        0.01404197 = weight(_text_:of in 733) [ClassicSimilarity], result of:
          0.01404197 = score(doc=733,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2292085 = fieldWeight in 733, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=733)
        0.023125032 = weight(_text_:systems in 733) [ClassicSimilarity], result of:
          0.023125032 = score(doc=733,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.19207339 = fieldWeight in 733, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=733)
      0.22222222 = coord(2/9)
    
    Abstract
    Is this book sociology, anthropology, or taxonomy? Sorting Things Out, by communications theorists Geoffrey C. Bowker and Susan Leigh Star, covers a lot of conceptual ground in its effort to sort out exactly how and why we classify and categorize the things and concepts we encounter day to day. But the analysis doesn't stop there; the authors go on to explore what happens to our thinking as a result of our classifications. With great insight and precise academic language, they pick apart our information systems and language structures that lie deeper than the everyday categories we use. The authors focus first on the International Classification of Diseases (ICD), a scheme widely used by health professionals worldwide, but also look at other health information systems, racial classifications used by South Africa during apartheid, and more. Though it comes off as a bit too academic at times (by the end of the 20th century, most writers should be able to get the spelling of McDonald's restaurant right), the book has a clever charm that thoughtful readers will surely appreciate. A sly sense of humor sneaks into the writing, giving rise to the chapter title "The Kindness of Strangers," for example. After arguing that categorization is both strongly influenced by and a powerful reinforcer of ideology, it follows that revolutions (political or scientific) must change the way things are sorted in order to throw over the old system. Who knew that such simple, basic elements of thought could have such far-reaching consequences? Whether you ultimately place it with social science, linguistics, or (as the authors fear) fantasy, make sure you put Sorting Things Out in your reading pile.
    LCSH
    Knowledge, Sociology of
    Subject
    Knowledge, Sociology of
  16. Khanna, J.K.: Analytico-synthetic classification : (a study in CC-7) (1994) 0.01
    0.0077348063 = product of:
      0.034806628 = sum of:
        0.018454762 = weight(_text_:of in 1471) [ClassicSimilarity], result of:
          0.018454762 = score(doc=1471,freq=38.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.30123898 = fieldWeight in 1471, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1471)
        0.016351866 = weight(_text_:systems in 1471) [ClassicSimilarity], result of:
          0.016351866 = score(doc=1471,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1358164 = fieldWeight in 1471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=1471)
      0.22222222 = coord(2/9)
    
    Abstract
    Analytico-synthetic classification, the brain-child of S.R. Ranganathan, has brought about an intellectual revolution in the theory and methodology of library classification by generating new ideas. By his vast erudition and deeper research in the Universe of Subjects, Ranganathan applied a postulation approach to classification based on the concepts of facet analysis, Phase Analysis, Sector Analysis and Zone Analysis. His enquiry into the concept of Fundamental Categories as well as the analytico-synthetic quality associated with it, the use of different connecting symbols, as in the Meccano apparatus, for constructing expressive class numbers for subjects of any depth, the versatility of Notation, the analysis of Rounds and Levels, the formation and sharpening of Isolates through various devices, and the introduction of the novel concepts of Specials, Systems, Speciators, and Environment Constituents have systematized the whole study of classification into principles, rules and canons. These new methodologies in classification, invented as part of the Colon Classification, have not only lifted practical classification from mere guesswork to scientific methodology but also form an important theme in international conferences. The present work discusses in detail the unique methodologies of Ranganathan as used in CC-7. The concepts of Primary Basic Subjects and Non-Primary Basic Subjects are also discussed at length.
    Content
    Contents: 1. Species of Classification 2. The Making of an Analytico-Synthetic Classification 3. Analytico-Synthetic Classification 4. Basic Subject 5. Primary Basic Subject 6. Non-Primary Basic Subject 7. Notation 8. Fundamental Categories 9. Rounds and Levels 10. Facet Analysis and Facet Sequence 11. Phase Relation 12. Devices in Colon Classification 13. Common Isolates 14. Space Isolates 15. Language Isolates 16. Time Isolates 17. Call Number - Class Number - Book Number 18. Ranganathan's Influence on International Classification Thought 19. Alphabetical Index to the Schedule of Basic Subjects
  17. Hjoerland, B.: ¬The classification of psychology : a case study in the classification of a knowledge field (1998) 0.01
    0.007512961 = product of:
      0.033808324 = sum of:
        0.017456459 = weight(_text_:of in 3783) [ClassicSimilarity], result of:
          0.017456459 = score(doc=3783,freq=34.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.28494355 = fieldWeight in 3783, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3783)
        0.016351866 = weight(_text_:systems in 3783) [ClassicSimilarity], result of:
          0.016351866 = score(doc=3783,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1358164 = fieldWeight in 3783, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=3783)
      0.22222222 = coord(2/9)
    
    Abstract
    Different approaches to the classification of a knowledge field include empiristic, rationalistic, historistic, and pragmatic methods. This paper demonstrates how these different methods have been applied to the classification of psychology. An etymological approach is insufficient to define the subject matter of psychology, because other terms can be used to describe the same domain. To define the subject matter of psychology from the point of view of its formal establishment as a science and academic discipline (in Leipzig, 1879) is also insufficient because this was done in specific historical circumstances, which narrowed the subject matter to physiologically-related issues. When defining the subject area of a scientific field it is necessary to consider how different ontological and epistemological views have made their influences. A subject area and the approaches by which this subject area has been studied cannot be separated from each other without tracing their mutual historical interactions. The classification of a subject field is theory-laden and thus cannot be neutral or ahistorical. If classification research can claim to have a method that is more general than the study of concrete developments in the single knowledge fields, the key is to be found in the general epistemological theories. It is shown how basic epistemological assumptions have formed the different approaches to psychology during the 20th century. The progress in the understanding of basic philosophical questions is decisive both for the development of a knowledge field and as the point of departure of classification. The theoretical principles developed in this paper are applied in a brief analysis of some concrete classification systems, including the one used by PsycINFO / Psychological Abstracts. The role of classification in modern information retrieval is also briefly discussed
  18. Kwasnik, B.H.: ¬The role of classification in knowledge representation (1999) 0.01
    0.0066943 = product of:
      0.03012435 = sum of:
        0.014200641 = weight(_text_:of in 2464) [ClassicSimilarity], result of:
          0.014200641 = score(doc=2464,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 2464, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2464)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 2464) [ClassicSimilarity], result of:
              0.031847417 = score(doc=2464,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 2464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2464)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    A fascinating, broad-ranging article about classification, knowledge, and how they relate. Hierarchies, trees, paradigms (a two-dimensional classification that can look something like a spreadsheet), and facets are covered, with descriptions of how they work and how they can be used for knowledge discovery and creation. Kwasnik outlines how to make a faceted classification: choose facets, develop facets, analyze entities using the facets, and make a citation order. Facets are useful for many reasons: they do not require complete knowledge of the entire body of material; they are hospitable, flexible, and expressive; they do not require a rigid background theory; they can mix theoretical structures and models; and they allow users to view things from many perspectives. Facets do have faults: it can be hard to pick the right ones; it is hard to show relations between them; and it is difficult to visualize them. The coverage of the other methods is equally thorough and there is much to consider for anyone putting a classification on the web.
    Source
    Library trends. 48(1999) no.1, S.22-47
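    The four steps Kwasnik describes for building a faceted classification (choose facets, develop them, analyze entities against them, and fix a citation order) are easy to picture with a toy example. In the sketch below the facets, foci and citation order are invented purely for illustration and are not taken from the article:

        # Toy facet analysis; the facets, foci and citation order are hypothetical.
        facets = {                                          # steps 1-2: choose and develop facets
            "subject":  ["classification", "cataloguing", "indexing"],
            "form":     ["monograph", "serial", "map"],
            "audience": ["practitioner", "researcher", "student"],
        }
        citation_order = ["subject", "form", "audience"]    # step 4: fix a citation order

        def classify(keywords):
            # step 3: analyze an entity by picking one focus per facet,
            # then string the chosen foci together in citation order
            foci = {facet: next(v for v in values if v in keywords)
                    for facet, values in facets.items()}
            return " / ".join(foci[facet] for facet in citation_order)

        print(classify({"classification", "monograph", "student"}))
        # -> classification / monograph / student

    The point of the example is the last step: however the entity happens to be described, its notation always lists the chosen foci in the fixed citation order.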
  19. Holman, E.E.: Statistical properties of large published classifications (1992) 0.00
    0.003155698 = product of:
      0.028401282 = sum of:
        0.028401282 = weight(_text_:of in 4250) [ClassicSimilarity], result of:
          0.028401282 = score(doc=4250,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.46359703 = fieldWeight in 4250, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=4250)
      0.11111111 = coord(1/9)
    
    Abstract
    Reports the results of a survey of 23 published classifications taken from a variety of subject fields
    Source
    Journal of classification. 9(1992) no.2, S.187-210
  20. Garcia Marco, F.J.; Esteban Navarro, M.A.: On some contributions of the cognitive sciences and epistemology to a theory of classification (1995) 0.00
    0.0029752206 = product of:
      0.026776984 = sum of:
        0.026776984 = weight(_text_:of in 5559) [ClassicSimilarity], result of:
          0.026776984 = score(doc=5559,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.43708345 = fieldWeight in 5559, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5559)
      0.11111111 = coord(1/9)
    
    Abstract
    Discusses classification as a central resource of human informational activity and as a central aspect of research for many sciences. Argues that thinking about the background of classification can help improve, or at least clarify, the practical tasks of documentary workers and librarians. Discusses the relationship and gaps between cognitive science and information science, and considers the contributions of epistemology and cognitive psychology; in particular, focuses on the role of the latter in the development of an integrative theory of classification