Search (39 results, page 1 of 2)

  • theme_ss:"Universale Facettenklassifikationen"
  1. Dahlberg, I.: Grundlagen universaler Wissensordnung : Probleme und Möglichkeiten eines universalen Klassifikationssystems des Wissens (1974) 0.01
    0.014161265 = product of:
      0.070806324 = sum of:
        0.070806324 = weight(_text_:22 in 127) [ClassicSimilarity], result of:
          0.070806324 = score(doc=127,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.38690117 = fieldWeight in 127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=127)
      0.2 = coord(1/5)
    
    Footnote
Also a doctoral dissertation, Univ. Düsseldorf. - Review in: ZfBB. 22(1975) S.53-57 (H.-A. Koch)
  2. Dahlberg, I.: The future of classification in libraries and networks : a theoretical point of view (1995) 0.01
    0.012779278 = product of:
      0.06389639 = sum of:
        0.06389639 = weight(_text_:it in 5563) [ClassicSimilarity], result of:
          0.06389639 = score(doc=5563,freq=14.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.42272866 = fieldWeight in 5563, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5563)
      0.2 = coord(1/5)
    
    Abstract
    Some time ago, some people said classification was dead and we did not need it any more. They probably thought that subject headings could do the job of the necessary subject analysis and shelving of books. However, the attitude suddenly changed in 1984, when an OCLC study by Karen Markey began to show what could be done even with an "outdated system" such as the Dewey Decimal Classification once it was visible on a computer screen, demonstrating the helpfulness of a classified library catalogue in an OPAC; classification was brought back into the minds of doubtful librarians and of all those who thought they would not need it any longer. But the problem once phrased as "We are stuck with the two old systems, LCC and DDC" has not found a solution and is still with us today. We know that our systems are outdated, but we still seem unable to replace them with better ones. What then should one do and advise, knowing that we need something better? Perhaps a new universal ordering system which more adequately represents and mediates the world of our present-day knowledge? If we were to develop it from scratch, how would we create and implement it in such a way that it would be acceptable to the majority of the present intellectual world population?
  3. Austin, D.: Prospects for a new general classification (1969) 0.01
    0.011831312 = product of:
      0.05915656 = sum of:
        0.05915656 = weight(_text_:it in 1519) [ClassicSimilarity], result of:
          0.05915656 = score(doc=1519,freq=12.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.39137068 = fieldWeight in 1519, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1519)
      0.2 = coord(1/5)
    
    Abstract
    In traditional classification schemes, the universe of knowledge is broken down into self-contained disciplines which are further analysed to the point at which a particular concept is located. This leads to problems of: (a) currency: keeping the scheme in line with new discoveries; (b) hospitality: allowing room for the insertion of new subjects; (c) cross-classification: a concept may be considered in such a way that it fits as logically into one discipline as another. Machine retrieval is also hampered by the fact that any individual concept is notated differently, depending on where in the scheme it appears. The approach now considered is from an organized universe of concepts, every concept being set down only once in an appropriate vocabulary, where it acquires the notation which identifies it wherever it is used. It has been found that all the concepts present in any compound subject can be handled as though they belong to one of two basic concept types, being either Entities or Attributes. In classing, these concepts are identified, and notation is selected from appropriate schedules. Subjects are then built according to formal rules, the final class number incorporating operators which convey the fundamental relationships between concepts. From this viewpoint, the Rules and Operators of the proposed system can be seen as the grammar of an IR language, and the schedules of Entities and Attributes as its vocabulary.
  4. Satija, M.P.; Oh, D.-G.: The DDC and the knowledge categories : Dewey did faceting without knowing it (2017) 0.01
    0.011592272 = product of:
      0.057961356 = sum of:
        0.057961356 = weight(_text_:it in 4157) [ClassicSimilarity], result of:
          0.057961356 = score(doc=4157,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.38346338 = fieldWeight in 4157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.09375 = fieldNorm(doc=4157)
      0.2 = coord(1/5)
    
  5. Lin, W.-Y.C.: The concept and applications of faceted classifications (2006) 0.01
    0.011329013 = product of:
      0.05664506 = sum of:
        0.05664506 = weight(_text_:22 in 5083) [ClassicSimilarity], result of:
          0.05664506 = score(doc=5083,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.30952093 = fieldWeight in 5083, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=5083)
      0.2 = coord(1/5)
    
    Date
    27. 5.2007 22:19:35
  6. Tennis, J.T.: Facets and fugit tempus : considering time's effect on faceted classification schemes (2012) 0.01
    0.011329013 = product of:
      0.05664506 = sum of:
        0.05664506 = weight(_text_:22 in 826) [ClassicSimilarity], result of:
          0.05664506 = score(doc=826,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.30952093 = fieldWeight in 826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=826)
      0.2 = coord(1/5)
    
    Date
    2. 6.2013 18:33:22
  7. Satija, M.P.: Save the national heritage : revise the Colon Classification (2015) 0.01
    0.010929298 = product of:
      0.05464649 = sum of:
        0.05464649 = weight(_text_:it in 2791) [ClassicSimilarity], result of:
          0.05464649 = score(doc=2791,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.36153275 = fieldWeight in 2791, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=2791)
      0.2 = coord(1/5)
    
    Abstract
    The paper presents a case for the revival of Colon Classification (CC). It traces the status of CC in brief and discusses its features. The author brings to light attempts made at providing a base for continuous improvements in the scheme and bringing it back to life. Measures for the revival of CC are suggested.
  8. Mills, J.: Faceted classification and logical division in information retrieval (2004) 0.01
    0.010039202 = product of:
      0.050196007 = sum of:
        0.050196007 = weight(_text_:it in 831) [ClassicSimilarity], result of:
          0.050196007 = score(doc=831,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.33208904 = fieldWeight in 831, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=831)
      0.2 = coord(1/5)
    
    Abstract
    The main object of the paper is to demonstrate in detail the role of classification in information retrieval (IR) and the design of classificatory structures by the application of logical division to all forms of the content of records, subject and imaginative. The natural product of such division is a faceted classification. The latter is seen not as a particular kind of library classification but as the only viable form enabling the locating and relating of information to be optimally predictable. A detailed exposition of the practical steps in facet analysis is given, drawing on the experience of the new Bliss Classification (BC2). The continued existence of the library as a highly organized information store is assumed. But, it is argued, it must acknowledge the relevance of the revolution in library classification that has taken place. The paper also considers how alphabetically arranged subject indexes may utilize controlled use of categorical (generically inclusive) and syntactic relations to produce similarly predictable locating and relating systems for IR.
  9. Broughton, V.: Finding Bliss on the Web : some problems of representing faceted terminologies in digital environments 0.01
    0.010039202 = product of:
      0.050196007 = sum of:
        0.050196007 = weight(_text_:it in 3532) [ClassicSimilarity], result of:
          0.050196007 = score(doc=3532,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.33208904 = fieldWeight in 3532, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=3532)
      0.2 = coord(1/5)
    
    Abstract
    The Bliss Bibliographic Classification is the only example of a fully faceted general classification scheme in the Western world. Although it is the object of much interest as a model for other tools, it suffers from the lack of a web presence, and remedying this is an immediate objective for its editors. Understanding how this might be done presents some challenges, as the scheme is semantically very rich and complex in the range and nature of the relationships it contains. The automatic management of these is already in place using local software, but exporting this to a common data format needs careful thought and planning. Various encoding schemes, both for traditional classifications and for digital materials, variously represent the concepts, their functional roles, and the relationships between them. Integrating these aspects in a coherent and interchangeable manner appears to be achievable, but the most appropriate format is as yet unclear.
  10. Perugini, S.: Supporting multiple paths to objects in information hierarchies : faceted classification, faceted search, and symbolic links (2010) 0.01
    0.009912886 = product of:
      0.04956443 = sum of:
        0.04956443 = weight(_text_:22 in 4227) [ClassicSimilarity], result of:
          0.04956443 = score(doc=4227,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.2708308 = fieldWeight in 4227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4227)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 46(2010) no.1, S.22-43
  11. Heuvel, C. van den: Multidimensional classifications : past and future conceptualizations and visualizations (2012) 0.01
    0.009912886 = product of:
      0.04956443 = sum of:
        0.04956443 = weight(_text_:22 in 632) [ClassicSimilarity], result of:
          0.04956443 = score(doc=632,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.2708308 = fieldWeight in 632, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=632)
      0.2 = coord(1/5)
    
    Date
    22. 2.2013 11:31:25
  12. Barité, M.; Rauch, M.: Systematifier : in rescue of a useful tool in domain analysis (2017) 0.01
    0.009660226 = product of:
      0.04830113 = sum of:
        0.04830113 = weight(_text_:it in 4142) [ClassicSimilarity], result of:
          0.04830113 = score(doc=4142,freq=8.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.31955284 = fieldWeight in 4142, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4142)
      0.2 = coord(1/5)
    
    Abstract
    Literature on the systematifier is remarkably limited in knowledge organization. Dahlberg created the procedure in the seventies as a guide for the construction of classification systems and showed its applicability in systems she developed. According to her initial conception, all disciplines should be structured in the following sequence: Foundations and theories - Subjects of study - Methods - Influences - Applications - Environment. The nature of the procedure is determined in this study, and the concept is situated in relation to domain analysis methodologies. As a tool for organizing the map of a certain domain, it is associated with a rationalist perspective and the top-down design of systems construction. It would require a reassessment of its scope in order to ensure its applicability to multidisciplinary and interdisciplinary domains. Among other conclusions, it is highlighted that the greatest potential of the systematifier lies in the fact that, as a methodological device, it can act as: (i) an analyzer of a subject area; (ii) an organizer of its main terms; and (iii) an identifier of links, bridges and intersection points with other knowledge areas.
  13. Integrative level classification: Research project (2004-) 0.01
    0.009563136 = product of:
      0.04781568 = sum of:
        0.04781568 = weight(_text_:it in 1151) [ClassicSimilarity], result of:
          0.04781568 = score(doc=1151,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.31634116 = fieldWeight in 1151, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1151)
      0.2 = coord(1/5)
    
    Abstract
    Integrative level classification (ILC) is a research project that has been under development since 2004 by some members of the Italian chapter of ISKO, in cooperation with other researchers. Anyone interested is welcome to contact us at: ilc@mate.unipv.it. The aim of the project is to test the application of the theory of integrative levels to knowledge organization (KO). This implies a naturalistic-ontological approach to KO, which is obviously not the only possible approach; indeed, it even looks unfashionable nowadays, although it agrees with current trends towards interdisciplinarity and interrelation between many research fields.
  14. Broughton, V.: Facet analysis : the evolution of an idea (2023) 0.01
    0.009563136 = product of:
      0.04781568 = sum of:
        0.04781568 = weight(_text_:it in 1164) [ClassicSimilarity], result of:
          0.04781568 = score(doc=1164,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.31634116 = fieldWeight in 1164, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1164)
      0.2 = coord(1/5)
    
    Abstract
    Facets are widely encountered in information and knowledge organization, but there is much disparity in the use and understanding of concepts such as "facet," "facet analysis," and "faceted classification." The paper traces the history of these ideas and how they have been employed in different contexts. What may be termed the classical school of faceted classification is given some prominence, through the ideas of Ranganathan and the Classification Research Group, but other interpretations are also explored. Attention is paid not only to the idea of what facet analysis is, and what purpose it serves, but also to the language utilized to describe and explain it.
  15. Szostak, R.: Facet analysis using grammar (2017) 0.01
    0.008366001 = product of:
      0.041830003 = sum of:
        0.041830003 = weight(_text_:it in 3866) [ClassicSimilarity], result of:
          0.041830003 = score(doc=3866,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27674085 = fieldWeight in 3866, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3866)
      0.2 = coord(1/5)
    
    Abstract
    Basic grammar can achieve most, if not all, of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for the classificationist, the classifier, and the user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable (and programmable) set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to formulating a subject classification. The user can likewise move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings assigned at present. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
  16. Szostak, R.: Basic Concepts Classification (BCC) (2020) 0.01
    0.008366001 = product of:
      0.041830003 = sum of:
        0.041830003 = weight(_text_:it in 5883) [ClassicSimilarity], result of:
          0.041830003 = score(doc=5883,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27674085 = fieldWeight in 5883, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5883)
      0.2 = coord(1/5)
    
    Abstract
    The Basic Concepts Classification (BCC) is a "universal" scheme: it attempts to encompass all areas of human understanding. Whereas most universal schemes are organized around scholarly disciplines, the BCC is instead organized around phenomena (things), the relationships that exist among phenomena, and the properties that phenomena and relators may possess. This structure allows the BCC to apply facet analysis without requiring the use of "facet indicators." The main motivation for the BCC was a recognition that existing classifications organized around disciplines serve interdisciplinary scholarship poorly. Complex concepts that might be understood quite differently across groups and individuals can generally be broken into basic concepts for which there is enough shared understanding for the purposes of classification. Documents, ideas, and objects are classified synthetically by combining entries from the schedules of phenomena, relators, and properties. The inclusion of separate schedules of (generally verb-like) relators is one of the most unusual aspects of the BCC. This (and the schedules of properties that serve as adjectives or adverbs) allows the production of sentence-like subject strings. Documents can then be classified in terms of the main arguments made in the document. The BCC provides very precise descriptors of documents by combining phenomena, relators, and properties synthetically. The terminology employed in the BCC reduces terminological ambiguity. The BCC is still being developed and needs to be fleshed out in certain respects. Yet it also needs to be applied; only in application can the feasibility and desirability of the classification be adequately assessed.
  17. Babbar, P.: Web CC : an effort towards its revival (2015) 0.01
    0.008196974 = product of:
      0.04098487 = sum of:
        0.04098487 = weight(_text_:it in 2792) [ClassicSimilarity], result of:
          0.04098487 = score(doc=2792,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 2792, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=2792)
      0.2 = coord(1/5)
    
    Abstract
    Colon Classification (CC), based on a dynamic theory of classification, saw seven editions from 1928 to 1987. Because it was not revised for a long time after the 7th edition, libraries practising it continued with the extensions and additions they made to meet their own needs. Revision requires adding terms in different disciplines, organising them in relation to each other and assigning notation for shelf classification. The use of ICT would help in reviving CC and is essential for the regular revision of a classification scheme. The paper explores the possibility of creating an expert system through the design of a Web-based Colon Classification, and presents a prototype for the online revision of the scheme.
  18. Broughton, V.: Bliss Bibliographic Classification Second Edition (2009) 0.01
    0.0077281813 = product of:
      0.038640905 = sum of:
        0.038640905 = weight(_text_:it in 3755) [ClassicSimilarity], result of:
          0.038640905 = score(doc=3755,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.25564227 = fieldWeight in 3755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=3755)
      0.2 = coord(1/5)
    
    Abstract
    This entry looks at the origins of the Bliss Bibliographic Classification 2nd edition and the theory on which it is built. The reasons for the decision to revise the classification are examined, as are the influences on classification theory of the mid-twentieth century. The process of revision and construction of schedules using facet analysis is described. The use of BC2 is considered along with some recent development work on thesaural and digital formats.
  19. Panigrahi, P.: Ranganathan and Dewey in hierarchical subject classification : some similarities (2015) 0.01
    0.0077281813 = product of:
      0.038640905 = sum of:
        0.038640905 = weight(_text_:it in 2789) [ClassicSimilarity], result of:
          0.038640905 = score(doc=2789,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.25564227 = fieldWeight in 2789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=2789)
      0.2 = coord(1/5)
    
    Abstract
    S.R. Ranganathan and Melvil Dewey devised two types of classification schemes, viz. faceted and enumerative. Ranganathan's faceted classification scheme is based on postulates, principles and canons; it has a strong theory. While working with the two schemes, the author observed similarities. This paper tries to identify and present some of these relationships.
  20. Faceted classification today : International UDC Seminar 2017, 14.-15. September, London, UK. (2017) 0.01
    0.0077281813 = product of:
      0.038640905 = sum of:
        0.038640905 = weight(_text_:it in 3773) [ClassicSimilarity], result of:
          0.038640905 = score(doc=3773,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.25564227 = fieldWeight in 3773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=3773)
      0.2 = coord(1/5)
    
    Abstract
    Faceted analytical theory is a widely accepted approach for constructing modern classification schemes and other controlled vocabularies. While the advantages of the faceted approach are broadly accepted and understood, the actual implementation is coupled with many challenges when it comes to data modelling, management and retrieval. UDC Seminar 2017 revisits faceted analytical theory as one of the most influential methodologies in the development of knowledge organization systems.
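
The relevance figures attached to each result above are Lucene "explain" output for the ClassicSimilarity (TF-IDF) model: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm, and the contributions are summed and multiplied by the coordination factor coord. As a minimal sketch (assuming Lucene's standard ClassicSimilarity formulas, idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq)), the Python below reproduces the score of the first result from the numbers printed in its explanation tree:

```python
import math

def classic_similarity_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution, as printed in a Lucene ClassicSimilarity
    'explain' tree (a sketch of the formula, not the actual Lucene code)."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=..., maxDocs=...)
    tf = math.sqrt(freq)                             # tf(freq=...)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    return query_weight * field_weight               # weight(_text_:term in doc)

# Values copied from result 1 (term "22" in doc 127):
weight = classic_similarity_weight(freq=2.0, doc_freq=3622, max_docs=44218,
                                   query_norm=0.052260913, field_norm=0.078125)
coord = 1 / 5            # coord(1/5): one of five query terms matched
score = coord * weight   # sum over matching terms (here a single term), times coord

print(f"{weight:.9f}")   # ~0.070806324
print(f"{score:.9f}")    # ~0.014161265
```

The same arithmetic, applied with each entry's own freq, docFreq, fieldNorm, and coord values, reproduces the scores shown for the other results.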