Search (93 results, page 5 of 5)

  • theme_ss:"Universale Facettenklassifikationen"
  • type_ss:"a"
  1. Frické, M.: Logical division (2016) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 3183) [ClassicSimilarity], result of:
              0.006765375 = score(doc=3183,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 3183, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3183)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Division is obviously important to Knowledge Organization. Typically, an organizational infrastructure might acknowledge three types of connecting relationships: class hierarchies, where some classes are subclasses of others; partitive hierarchies, where some items are parts of others; and instantiation, where some items are members of some classes (see ANSI/NISO Z39.19 2005 as an example). The first two of these involve division; the third, instantiation, does not. Logical division is usually part of hierarchical classification systems, which, in turn, are central to shelving in libraries, to subject classification schemes, to controlled vocabularies, and to thesauri. Partitive hierarchies, and partitive division, are often essential to controlled vocabularies, thesauri, and subject tagging systems. Partitive hierarchies also relate to the bearers of information; for example, a journal would typically have its component articles as parts and, in turn, those might have sections as their parts, and, of course, components might be arrived at by partitive division (see Tillett 2009 as an illustration). Finally, verbal division, disambiguating homographs, is basic to controlled vocabularies. Division is thus a broad and relevant topic. This article, though, focuses on Logical Division.
    Type
    a
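The per-result score breakdown shown for the first entry above is Lucene's ClassicSimilarity (tf-idf) explanation. A minimal sketch reproducing its numbers (the freq, docFreq, maxDocs, queryNorm, and fieldNorm values are copied from the explain tree; the two coord(1/2) factors are taken directly from the output):

```python
import math

# Values from the explain tree for doc 3183 (Fricke, "Logical division").
freq = 8.0              # occurrences of the term "a" in the field
doc_freq = 37942        # documents containing the term
max_docs = 44218        # documents in the index
query_norm = 0.046056706
field_norm = 0.0390625  # field length normalization

# ClassicSimilarity components
tf = math.sqrt(freq)                            # 2.828427
idf = 1 + math.log(max_docs / (doc_freq + 1))   # 1.153047
query_weight = idf * query_norm                 # 0.053105544
field_weight = tf * idf * field_norm            # 0.12739488
raw_score = query_weight * field_weight         # 0.006765375

# Two coord(1/2) factors: one of two query clauses matched at each level.
final_score = raw_score * 0.5 * 0.5             # 0.0016913437
print(final_score)
```

The near-constant idf (the term "a" occurs in 37942 of 44218 documents) explains why every result here scores close to zero: only freq and fieldNorm vary between entries.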
  2. Satija, M.P.: Colon Classification (CC) (2017) 0.00
    Abstract
    Shiyali Ramamrita Ranganathan (1892-1972) has been called the father of the Indian library movement. He developed the revolutionary Colon Classification (CC) from 1924 to 1928, which was published in seven editions from 1933 to 1987. In this article, the evolution of CC through its seven editions is discussed. The unique features of CC are described, including the work in idea, verbal, and notational planes. Tools for designing and evaluating a system are enshrined in his fifty-five canons, twenty-two principles, thirteen postulates, and ten devices (Indian Statistical Institute 2012, 34-38). Semantic and syntactic relations are enshrined in his order of main classes, Principles of Helpful Sequence in arrays, the PMEST facet formula fitted with rounds and levels of facets, and other principles, such as the famous wall-picture principle for citation order of facets, and numerous devices for improvising class numbers for non-existent isolates and potential subjects. Briefly explained are facet and phase analyses and number building with its notational base of seventy-four characters and symbols. The entry concludes with a discussion of the extent of application of CC in libraries, its contribution to the science of classification, and a view of its future.
    Type
    a
  3. Sharada, B.A.: Ranganathan's Colon Classification : Kannada-English Version 'dwibindu vargiikaraNa' (2012) 0.00
    Source
    Categories, contexts and relations in knowledge organization: Proceedings of the Twelfth International ISKO Conference, 6-9 August 2012, Mysore, India. Eds.: Neelameghan, A. and K.S. Raghavan
    Type
    a
  4. Szostak, R.: Basic Concepts Classification (BCC) (2020) 0.00
    Abstract
    The Basic Concepts Classification (BCC) is a "universal" scheme: it attempts to encompass all areas of human understanding. Whereas most universal schemes are organized around scholarly disciplines, the BCC is instead organized around phenomena (things), the relationships that exist among phenomena, and the properties that phenomena and relators may possess. This structure allows the BCC to apply facet analysis without requiring the use of "facet indicators." The main motivation for the BCC was a recognition that existing classifications organized around disciplines serve interdisciplinary scholarship poorly. Complex concepts that might be understood quite differently across groups and individuals can generally be broken into basic concepts for which there is enough shared understanding for the purposes of classification. Documents, ideas, and objects are classified synthetically by combining entries from the schedules of phenomena, relators, and properties. The inclusion of separate schedules of (generally verb-like) relators is one of the most unusual aspects of the BCC. This, together with the schedules of properties that serve as adjectives or adverbs, allows the production of sentence-like subject strings. Documents can then be classified in terms of the main arguments made in the document. By combining phenomena, relators, and properties synthetically, the BCC provides very precise descriptors of documents. The terminology employed in the BCC reduces terminological ambiguity. The BCC is still being developed and needs to be fleshed out in certain respects. Yet it also needs to be applied: only in application can the feasibility and desirability of the classification be adequately assessed.
    Type
    a
  5. Broughton, V.: Bliss Bibliographic Classification Second Edition (2009) 0.00
    Type
    a
  6. Dahlberg, I.: Information Coding Classification : Geschichtliches, Prinzipien, Inhaltliches [history, principles, content] (2010) 0.00
    Type
    a
  7. Rout, R.; Panigrahi, P.: Revisiting Ranganathan's canons in online cataloguing environment (2015) 0.00
    Type
    a
  8. Montoya, R.D.: Parsimony in biological and colon classifications (2018) 0.00
    Type
    a
  9. Dutta, B.: Ranganathan's elucidation of subject in the light of 'Infinity (8)' (2015) 0.00
    Abstract
    This paper reviews Ranganathan's description of the subject from a mathematical angle. Ranganathan was highly influenced by the nineteenth-century mathematician Georg Cantor, and he used the concept of infinity in developing an axiomatic interpretation of the subject. The majority of library scientists have interpreted the concept of subject merely as a term, descriptor, or heading for inclusion in cataloguing and subject indexing. Some have interpreted subject on the basis of the document, i.e. from the angle of its aboutness or epistemological potential; others have explained subject from the viewpoint of social, cultural, or socio-cultural processes, and attempts have been made to describe subject from an epistemological viewpoint. But S.R. Ranganathan was the first to develop an axiomatic concept of subject in its own right. He built up an independent idea of the subject as ubiquitously pervasive in the human cognitive process. To develop the basic foundation of subject, he used the mathematical concepts of the infinite and the infinitesimal, construing the set of subjects, or universe of subjects, as a continuous infinite universe. A subject may also exist in extreme micro-form, which he termed a spot subject and analogized with a point: dimensionless, having only existence. The influence of the twentieth-century physicist George Gamow on Ranganathan's thought is also discussed.
    Type
    a
  10. Green, R.: Facet analysis and semantic frames (2017) 0.00
    Abstract
    Various fields, each with its own theories, techniques, and tools, are concerned with identifying and representing the conceptual structure of specific knowledge domains. This paper compares facet analysis, an analytic technique coming out of knowledge organization (especially as undertaken by members of the Classification Research Group (CRG)), with semantic frame analysis, an analytic technique coming out of lexical semantics (especially as undertaken by the developers of FrameNet). The investigation addresses three questions: 1) how do CRG-style facet analysis and semantic frame analysis characterize the conceptual structures that they identify?; 2) how similar are the techniques they use?; and 3) how similar are the conceptual structures they produce? Facet analysis is concerned with the logical categories underlying the terminology of an entire field, while semantic frame analysis is concerned with the participant-and-prop structure manifest in sentences about a type of situation or event. When their scopes of application are similar, as, for example, in the areas of the performing arts or education, the resulting facets and semantic frame elements often bear striking resemblance without being the same; facets are more often expressed as semantic types, while frame elements are more often expressed as roles.
    Type
    a
  11. Broughton, V.: Facet analysis : the evolution of an idea (2023) 0.00
    Type
    a
  12. Dahlberg, I.: Ontische Strukturen und Wissensmuster in der Wissensorganisation [Ontic structures and knowledge patterns in knowledge organization] (2004) 0.00
    Type
    a
  13. Dahlberg, I.: Wissensmuster und Musterwissen im Erfassen klassifikatorischer Ganzheiten [Knowledge patterns and pattern knowledge in capturing classificatory wholes] (1980) 0.00
    Type
    a

Languages

  • e 88
  • d 4
  • chi 1