Search (107 results, page 6 of 6)

  • theme_ss:"Universale Facettenklassifikationen"
  1. Raju, A.A.N.: Colon Classification: theory and practice : a self instructional manual (2001) 0.00
    Abstract
    Colon Classification (CC) is truly the first freely faceted scheme for library classification, devised and propagated by Dr. S.R. Ranganathan. The scheme is taught in theory and practice in most LIS schools in India and abroad. Many manuals, guide books and introductory works on CC have been published in the past, but the present work treads a new path in presenting CC to the student, teaching and professional community. The present work, Colon Classification: Theory and Practice; A Self Instructional Manual, is the result of the author's twenty-five years of experience teaching the theory and practice of CC to students of LIS. For the first time a concerted and systematic attempt has been made to present the theory and practice of CC in a self-instructional mode, keeping in view the requirements of student learners of Open Universities and Distance Education Institutions in particular. The other significant and novel features introduced in this manual are: presenting the scope of each block, consisting of certain units, followed by the objectives, introduction, sections, sub-sections, self-check exercises, glossary and assignment of each unit. It is hoped that all these features will help the users/readers of this manual to understand and quickly grasp the intricacies involved in the theory and practice of CC (6th Edition). The manual is presented in three blocks and twelve units.
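    The number-building drilled in such a manual can be sketched as a toy synthesis step. The facet-indicator symbols below follow published CC practice (Personality, Matter, Energy, Space, Time); the `synthesize` helper and the isolate numbers are illustrative assumptions, not taken from the manual itself.

```python
# CC facet indicators, per published Colon Classification practice:
# "," Personality, ";" Matter, ":" Energy, "." Space, "'" Time.
INDICATORS = {
    "personality": ",",
    "matter": ";",
    "energy": ":",
    "space": ".",
    "time": "'",
}

def synthesize(main_class, facets):
    """Append each facet isolate to the main class with its indicator symbol."""
    number = main_class
    for category, isolate in facets:
        number += INDICATORS[category] + isolate
    return number

# Toy synthesis for "treatment of tuberculosis of the lungs" under
# Medicine (L); the isolate numbers are illustrative simplifications.
print(synthesize("L", [("personality", "45"),
                       ("matter", "421"),
                       ("energy", "6")]))
```

    The sketch ignores Rounds, Levels and other CC devices; it only shows how connecting symbols make a class number expressive of its facet structure.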
  2. Szostak, R.: Basic Concepts Classification (BCC) (2020) 0.00
    Abstract
    The Basic Concepts Classification (BCC) is a "universal" scheme: it attempts to encompass all areas of human understanding. Whereas most universal schemes are organized around scholarly disciplines, the BCC is instead organized around phenomena (things), the relationships that exist among phenomena, and the properties that phenomena and relators may possess. This structure allows the BCC to apply facet analysis without requiring the use of "facet indicators." The main motivation for the BCC was the recognition that existing classifications organized around disciplines serve interdisciplinary scholarship poorly. Complex concepts that might be understood quite differently across groups and individuals can generally be broken into basic concepts for which there is enough shared understanding for the purposes of classification. Documents, ideas, and objects are classified synthetically by combining entries from the schedules of phenomena, relators, and properties. The inclusion of separate schedules of (generally verb-like) relators is one of the most unusual aspects of the BCC. This, together with the schedules of properties that serve as adjectives or adverbs, allows the production of sentence-like subject strings, so that documents can be classified in terms of the main arguments made in the document. The BCC provides very precise descriptors of documents by combining phenomena, relators, and properties synthetically, and the terminology it employs reduces terminological ambiguity. The BCC is still being developed and needs to be fleshed out in certain respects. Yet it also needs to be applied; only in application can the feasibility and desirability of the classification be adequately assessed.
    Type
    a
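    The synthetic combination the abstract describes, joining phenomena, relators and properties into sentence-like subject strings, can be sketched roughly as follows. The `Statement` class and the sample terms are assumptions for illustration, not entries from the BCC schedules.

```python
from dataclasses import dataclass

@dataclass
class Statement:
    subject: str            # a phenomenon
    relator: str            # a (generally verb-like) relator
    obj: str                # another phenomenon
    properties: tuple = ()  # adjective/adverb-like qualifiers of the relator

    def as_string(self):
        """Render the sentence-like subject string the BCC aims at."""
        qualified = self.relator + "".join(f"({p})" for p in self.properties)
        return f"{self.subject} {qualified} {self.obj}"

s = Statement("pesticides", "harm", "songbirds", ("unintentionally",))
print(s.as_string())
```

    Because each slot draws on its own schedule, a retrieval system could match on the subject phenomenon alone, on the relator, or on the full argument of a document.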
  3. Gnoli, C.: "Classic" vs. "freely" faceted classification (2007) 0.00
    Abstract
    Claudio Gnoli, of the University of Pavia in Italy and Chair of ISKO Italy, explored the relative merits of classic 'faceted classification' (FC) and 'freely faceted classification' (FFC). In classic FC, the facets (and their relationships) which might be combined to express a compound subject are restricted to those prescribed as inherent in the subject area; FC is therefore largely bounded by and restricted to a specific subject area. At the other extreme, free classification (as in the Web or folksonomies) allows the combination of values from multiple, disparate domains, where the relationships among the elements are often indeterminate and the semantics obscure. Claudio described how punched cards were an early example of free classification, and cited the coordination dogs : postmen : bites as one where the absence of defined relationships made the semantics ambiguous.
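    Gnoli's dogs : postmen : bites example can be made concrete with a minimal sketch: a free coordination is an unordered bag of terms, while a freely faceted statement types each term with a role. The role names (`agent`, `action`, `patient`) are illustrative assumptions, not part of any published scheme.

```python
# A free coordination is just a bag of terms: the relationships among
# dogs, postmen and bites are left unstated, so two opposite claims
# collapse into one and the same representation.
free_a = frozenset({"dogs", "postmen", "bites"})   # "dogs bite postmen"
free_b = frozenset({"postmen", "dogs", "bites"})   # "postmen bite dogs"?
print(free_a == free_b)   # the two readings are indistinguishable

# A freely faceted statement types each slot with a role, so the two
# readings remain distinct and the semantics are explicit.
ffc_a = {"agent": "dogs", "action": "bite", "patient": "postmen"}
ffc_b = {"agent": "postmen", "action": "bite", "patient": "dogs"}
print(ffc_a == ffc_b)     # the relationship now disambiguates them
```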
  4. Faceted classification today : International UDC Seminar 2017, 14-15 September, London, UK. (2017) 0.00
    Abstract
    Faceted analytical theory is a widely accepted approach for constructing modern classification schemes and other controlled vocabularies. While the advantages of the faceted approach are broadly accepted and understood, the actual implementation poses many challenges when it comes to data modelling, management and retrieval. UDC Seminar 2017 revisits faceted analytical theory as one of the most influential methodologies in the development of knowledge organization systems.
  5. Green, R.: Facet analysis and semantic frames (2017) 0.00
    Abstract
    Various fields, each with its own theories, techniques, and tools, are concerned with identifying and representing the conceptual structure of specific knowledge domains. This paper compares facet analysis, an analytic technique coming out of knowledge organization (especially as undertaken by members of the Classification Research Group (CRG)), with semantic frame analysis, an analytic technique coming out of lexical semantics (especially as undertaken by the developers of FrameNet). The investigation addresses three questions: 1) how do CRG-style facet analysis and semantic frame analysis characterize the conceptual structures that they identify?; 2) how similar are the techniques they use?; and 3) how similar are the conceptual structures they produce? Facet analysis is concerned with the logical categories underlying the terminology of an entire field, while semantic frame analysis is concerned with the participant-and-prop structure manifest in sentences about a type of situation or event. When their scopes of application are similar, as, for example, in the areas of the performing arts or education, the resulting facets and semantic frame elements often bear a striking resemblance without being the same; facets are more often expressed as semantic types, while frame elements are more often expressed as roles.
    Type
    a
  6. Khanna, J.K.: Analytico-synthetic classification : (a study in CC-7) (1994) 0.00
    Abstract
    Analytico-synthetic classification, the brain-child of S.R. Ranganathan, has brought about an intellectual revolution in the theory and methodology of library classification by generating new ideas. With his vast erudition and deep research into the Universe of Subjects, Ranganathan applied a postulational approach to classification based on the concepts of Facet Analysis, Phase Analysis, Sector Analysis and Zone Analysis. His enquiry into the concept of Fundamental Categories, as well as the analytico-synthetic quality associated with it, the use of different connecting symbols, as in the Meccano apparatus, for constructing expressive class numbers for subjects of any depth, the versatility of notation, the analysis of Rounds and Levels, the formation and sharpening of isolates through various devices, and the introduction of the novel concepts of Specials, Systems, Speciators and Environment Constituents have systematized the whole study of classification into principles, rules and canons. These new methodologies, invented as part of Colon Classification, have not only lifted practical classification from mere guesswork to scientific methodology but also form an important theme in international conferences. The present work discusses in detail the unique methodologies of Ranganathan as used in CC-7. The concepts of Primary Basic Subjects and Non-Primary Basic Subjects are also discussed at length.
  7. Dahlberg, I.: Wissensmuster und Musterwissen im Erfassen klassifikatorischer Ganzheiten (1980) 0.00
    Type
    a

Languages

  • e 101
  • d 5
  • chi 1

Types

  • a 93
  • el 9
  • m 7
  • s 4
  • b 1