Search (157 results, page 1 of 8)

  • theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.24
    Abstract
    In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., SVM, kNN), and metric (e.g., precision). In this work, we aim to build faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are direct parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm that infers the parent-child links from these three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
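The conflict-resolution idea mentioned in the abstract above, i.e. accepting an inferred parent-child link only if the hierarchy stays acyclic, can be sketched roughly as follows. This is an illustrative sketch under assumed simplifications, not the authors' published algorithm; the function names and example concepts are hypothetical.

```python
# Illustrative sketch only: grow a hierarchy by adding parent->child links,
# rejecting any link that would create a cycle (the paper's algorithm also
# uses synonym, sibling, and facet information, which is omitted here).

def reaches(edges, start, target):
    """Depth-first check whether `target` is reachable from `start`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(child for parent, child in edges if parent == node)
    return False

def add_link(edges, parent, child):
    """Add a parent->child link only if it keeps the hierarchy acyclic."""
    if reaches(edges, child, parent):  # parent already below child: cycle
        return False
    edges.append((parent, child))
    return True

edges = []
add_link(edges, "classification", "svm")   # accepted
add_link(edges, "svm", "classification")   # rejected: would close a cycle
print(edges)  # [('classification', 'svm')]
```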
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.13
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.10
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  4. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.05
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
    Source
    Cataloging and classification quarterly. 43(2006) nos.3/4, S.69-83
  5. Panzer, M.: Towards the "webification" of controlled subject vocabulary : a case study involving the Dewey Decimal Classification (2007) 0.05
    Abstract
    The presentation will briefly introduce a series of major principles for bringing subject terminology to the network level. A closer look at one KOS in particular, the Dewey Decimal Classification, should help to gain more insight into the perceived difficulties and potential benefits of building taxonomy services out of and on top of classic large-scale vocabularies or taxonomies.
  6. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.04
    Abstract
    This book covers the basics of Semantic Web technologies and indexing languages, and describes how they contribute to improving indexing languages as tools for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
    LCSH
    World Wide Web / Subject access
    Subject
    World Wide Web / Subject access
  7. Deokattey, S.; Neelameghan, A.; Kumar, V.: ¬A method for developing a domain ontology : a case study for a multidisciplinary subject (2010) 0.04
    Abstract
    A method to develop a prototype domain ontology is described. The domain selected for the study is Accelerator Driven Systems. This is a multidisciplinary and interdisciplinary subject comprising Nuclear Physics, Nuclear and Reactor Engineering, Reactor Fuels and Radioactive Waste Management. Since Accelerator Driven Systems is a vast topic, select areas in it were singled out for the study. Both qualitative and quantitative methods, such as content analysis, facet analysis and clustering, were used to develop the web-based model.
    Date
    22. 7.2010 19:41:16
  8. Buizza, G.: Subject analysis and indexing : an "Italian version" of the analytico-synthetic model (2011) 0.04
    Abstract
    The paper presents the theoretical foundation of the Italian indexing system. A consistent integration of vocabulary control through a thesaurus (semantics) and of role analysis to construct subject strings (syntax) makes it possible to represent the full theme of a work, even a complex one, in a single string. The conceptual model produces a binary scheme: each aspect (entities, relationships, etc.) consists of a pair of elements, drawing the two lines of semantics and syntax. The meanings of 'concept' and 'theme' are analysed, also in comparison with the FRBR and FRSAD models, and an enriched model is proposed. A double existence of concepts is suggested: document-independent and document-dependent.
    Source
    Subject access: preparing for the future. Conference on August 20 - 21, 2009 in Florence, the IFLA Classification and Indexing Section sponsored an IFLA satellite conference entitled "Looking at the Past and Preparing for the Future". Eds.: P. Landry et al
  9. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.04
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  10. Sperber, W.; Ion, P.D.F.: Content analysis and classification in mathematics (2011) 0.04
    Abstract
    The number of publications in mathematics increases faster each year. Presently far more than 100,000 mathematically relevant journal articles and books are published annually. Efficient and high-quality content analysis of this material is important for mathematical bibliographic services such as ZBMath or MathSciNet. Content analysis has different facets and levels: classification, keywords, abstracts and reviews, and (in the future) formula analysis. It is the opinion of the authors that the different levels have to be enhanced and combined using the methods and technology of the Semantic Web. In the presentation, the problems and deficits of the existing methods and tools, the state of the art and current activities are discussed. As a first step, the Mathematical Subject Classification Scheme (MSC), has been encoded with Simple Knowledge Organization System (SKOS) and Resource Description Framework (RDF) at its recent revision to MSC2010. The use of SKOS principally opens new possibilities for the enrichment and wider deployment of this classification scheme and for machine-based content analysis of mathematical publications.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  11. Rolland-Thomas, P.: Thesaural codes : an appraisal of their use in the Library of Congress Subject Headings (1993) 0.04
    Abstract
    LCSH has been known as such since 1975. It has always created headings to serve the LC collections rather than on a theoretical basis. It started to replace cross-reference codes by thesaural codes in 1986, in a mechanical fashion; it was in no way transformed into a thesaurus. Its encyclopedic coverage and its pre-coordinate concepts make it substantially distinct, considering that thesauri usually map a restricted field of knowledge and use uniterms. The questions raised are whether the new symbols comply with thesaurus standards and whether they are true to one or to several models. Explanations and definitions from other lists of subject headings and thesauri, and literature in the field of classification and subject indexing, provide some answers. For instance, see refers from a subject heading not used to another or others used; exceptionally it will lead from a specific term to a more general one. Some equate a see reference with the equivalence relationship; such relationships are pointed to by USE in LCSH. See also references are made from the broader subject to narrower parts of it, and also between associated subjects. They suggest lateral or vertical connexions as well as reciprocal relationships; they serve a coordination purpose for some, and lay down a methodical search itinerary for others. Since their inception in the 1950s, thesauri have been devised for indexing and retrieving information in the fields of science and technology; eventually they attended to a number of the social sciences and humanities. Research derived from thesauri was voluminous, and numerous guidelines were designed; they did not discriminate between the "hard" sciences and the social sciences. RT relationships are widely but diversely used in numerous controlled vocabularies. LCSH's aim is to achieve a list almost free of RT and SA references; it thus restricts relationships to BT/NT, USE and UF.
    This raises the question as to whether all fields of knowledge can "fit" in the Procrustean bed of BT/NT, i.e., genus/species relationships. Standard codes were devised. It was soon realized that BT/NT, well suited to the genus/species couple, could not signal a whole-part relationship. In LCSH, BT and NT function as reciprocals; the whole-part relationship is taken into account by ISO, and it is amply elaborated upon by authors. The part-whole connexion is sometimes studied apart. The decision to replace cross-reference codes was an improvement. Relations can now be distinguished, though the distinct needs of numerous fields of knowledge are not attended to. Topic inclusion, and topic-subtopic, could provide the missing link where genus/species or whole/part are inadequate. Distinct codes, BT/NT and whole/part, should be provided. Sorting relationships with mechanical means can only lead to confusion.
    Source
    Cataloging and classification quarterly. 16(1993) no.2, S.71-91
  12. Buxton, A.: Ontologies and classification of chemicals : can they help each other? (2011) 0.04
    Abstract
    The chemistry schedule in the Universal Decimal Classification (UDC) is badly in need of revision. In many places it is enumerative rather than synthetic (giving rules for constructing numbers for any compound required). In principle, chemistry should be the ideal subject for a synthetic classification, but many common compounds have complex formulae and a synthetic system becomes unwieldy. Also, all compounds belong to several hierarchies, e.g. chloroquine is a heterocycle, an aromatic compound, an amine, an antimalarial drug, etc., and rules need to be drawn up as to which ones take precedence and which ones should be taken into account in classifying a compound. There are obvious similarities between a classification and an ontology. This paper looks at existing ontologies for chemistry, especially ChEBI, which is one of the largest, to examine how a classification and an ontology might draw on each other and what the problem areas are. An ontology might help in creating an index to a classification (for chemicals not listed, or to provide access by facets not used in the classification), and a classification could provide a hierarchy to use in an ontology.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  13. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.03
    Abstract
    18 August 2009 -- Today W3C announces a new standard that builds a bridge between the world of knowledge organization systems - including thesauri, classifications, subject headings, taxonomies, and folksonomies - and the linked data community, bringing benefits to both. Libraries, museums, newspapers, government portals, enterprises, social networking applications, and other communities that manage large collections of books, historical artifacts, news reports, business glossaries, blog entries, and other items can now use Simple Knowledge Organization System (SKOS) to leverage the power of linked data. As different communities with expertise and established vocabularies use SKOS to integrate them into the Semantic Web, they increase the value of the information for everyone.
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems
    A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays."
    Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in its own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions.
    SKOS can be used for subject headings, but also for many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. "One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud."
    SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies, but the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
  14. Kleineberg, M.: ¬The blind men and the elephant : towards an organization of epistemic contexts (2013) 0.03
    Abstract
    In the last two decades of knowledge organization (KO) research, there has been an increasing interest in the context-dependent nature of human knowledge. Contextualism maintains that knowledge is not available in a neutral and objective way, but is always interwoven with the process of knowledge production and the prerequisites of the knower. As a first step towards a systematic organization of epistemic contexts, the concept of knowledge will be considered in its ontological (WHAT) and epistemological (WHO) including methodological (HOW) dimensions. In current KO research, however, either the contextualism is not fully implemented (classification-as-ontology) or the ambition for a context-transcending universal KOS seems to have been abandoned (classification-as-epistemology). Based on a combined ontology and epistemology it will be argued that a phenomena-based approach to KO as stipulated by the León Manifesto, for example, requires a revision of the underlying phenomenon concept as a relation between the known object (WHAT) and the knowing subject (WHO), which is constituted by the application of specific methods (HOW). While traditional subject indexing of documents often relies on the organizing principle "levels of being" (WHAT), for a future context indexing, two novel principles are proposed, namely "levels of knowing" (WHO) and "integral methodological pluralism" (HOW).
  15. Frické, M.: Logical division (2016) 0.03
    Abstract
    Division is obviously important to Knowledge Organization. Typically, an organizational infrastructure might acknowledge three types of connecting relationships: class hierarchies, where some classes are subclasses of others, partitive hierarchies, where some items are parts of others, and instantiation, where some items are members of some classes (see Z39.19 ANSI/NISO 2005 as an example). The first two of these involve division (the third, instantiation, does not involve division). Logical division would usually be a part of hierarchical classification systems, which, in turn, are central to shelving in libraries, to subject classification schemes, to controlled vocabularies, and to thesauri. Partitive hierarchies, and partitive division, are often essential to controlled vocabularies, thesauri, and subject tagging systems. Partitive hierarchies also relate to the bearers of information; for example, a journal would typically have its component articles as parts and, in turn, they might have sections as their parts, and, of course, components might be arrived at by partitive division (see Tillett 2009 as an illustration). Finally, verbal division, disambiguating homographs, is basic to controlled vocabularies. Thus Division is a broad and relevant topic. This article, though, is going to focus on Logical Division.
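The three connecting relationships the abstract names (class hierarchy, partitive hierarchy, instantiation) can be sketched as three separate mappings; the example items are invented for illustration and only the first two relationships involve division.

```python
# Class hierarchy (logical division): subclass -> superclass.
SUBCLASS_OF = {"Drama": "Literature"}

# Partitive hierarchy (partitive division): part -> whole,
# e.g. a section is part of an article, an article part of a journal.
PART_OF = {"Section": "Article", "Article": "Journal"}

# Instantiation (no division): individual -> class.
INSTANCE_OF = {"Hamlet": "Drama"}

def transitive_parts(item):
    """Collect every whole that `item` is (transitively) a part of."""
    wholes = []
    while item in PART_OF:
        item = PART_OF[item]
        wholes.append(item)
    return wholes
```

Keeping the three relations in distinct structures mirrors the point of the abstract: they behave differently, and conflating part-of with subclass-of is a classic modelling error.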
  16. SKOS Simple Knowledge Organization System Reference : W3C Recommendation 18 August 2009 (2009) 0.03
    Abstract
    This document defines the Simple Knowledge Organization System (SKOS), a common data model for sharing and linking knowledge organization systems via the Web. Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications. The SKOS data model provides a standard, low-cost migration path for porting existing knowledge organization systems to the Semantic Web. SKOS also provides a lightweight, intuitive language for developing and sharing new knowledge organization systems. It may be used on its own, or in combination with formal knowledge representation languages such as the Web Ontology Language (OWL). This document is the normative specification of the Simple Knowledge Organization System. It is intended for readers who are involved in the design and implementation of information systems, and who already have a good understanding of Semantic Web technology, especially RDF and OWL. For an informative guide to using SKOS, see the [SKOS-PRIMER].
  17. SKOS Core Guide (2005) 0.03
    Abstract
    SKOS Core provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies. The SKOS Core Vocabulary is an application of the Resource Description Framework (RDF) that can be used to express a concept scheme as an RDF graph. Using RDF allows data to be linked to and/or merged with other data, enabling data sources to be distributed across the web, but still be meaningfully composed and integrated. This document is a guide to using the SKOS Core Vocabulary, for readers who already have a basic understanding of RDF concepts. This edition of the SKOS Core Guide [SKOS Core Guide] is a W3C Public Working Draft. It is the authoritative guide to recommended usage of the SKOS Core Vocabulary at the time of publication.
  18. SKOS Simple Knowledge Organization System Primer (2009) 0.03
    Abstract
    SKOS (Simple Knowledge Organisation System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be documented, linked and merged with other data, while still being composed, integrated and published on the World Wide Web. This document is an implementors' guide for those who would like to represent their concept scheme using SKOS. In basic SKOS, conceptual resources (concepts) can be identified using URIs, labelled with strings in one or more natural languages, documented with various types of notes, semantically related to each other in informal hierarchies and association networks, and aggregated into distinct concept schemes. In advanced SKOS, conceptual resources can be mapped to conceptual resources in other schemes and grouped into labelled or ordered collections. Concept labels can also be related to each other. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice.
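The "basic SKOS" features listed above (URI-identified concepts, multilingual labels, broader links, scheme membership) can be sketched as plain subject-predicate-object triples. A minimal sketch: the `ex:` URIs and labels are invented, and real data would be RDF using the `skos:` namespace terms named in the strings, but bare tuples are enough to show the triple shape.

```python
# Hand-rolled triple store sketching basic SKOS statements.
triples = [
    ("ex:drama", "skos:prefLabel", ("Drama", "en")),    # label in English
    ("ex:drama", "skos:prefLabel", ("Drame", "fr")),    # label in French
    ("ex:drama", "skos:broader", "ex:literature"),      # hierarchy link
    ("ex:drama", "skos:inScheme", "ex:scheme"),         # scheme membership
]

def pref_label(concept, lang):
    """Look up the preferred label of `concept` in language `lang`."""
    for s, p, o in triples:
        if s == concept and p == "skos:prefLabel" and o[1] == lang:
            return o[0]
    return None
```

Because everything is triples, merging two vocabularies (or adding a `skos:exactMatch` mapping between them, as with LOC and RAMEAU) is just concatenating lists of statements.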
  19. Prieto-Díaz, R.: ¬A faceted approach to building ontologies (2002) 0.03
    Abstract
    An ontology is "an explicit conceptualization of a domain of discourse, and thus provides a shared and common understanding of the domain." We have been producing ontologies for millennia to understand and explain our rationale and environment. From Plato's philosophical framework to modern day classification systems, ontologies are, in most cases, the product of extensive analysis and categorization. Only recently has the process of building ontologies become a research topic of interest. Today, ontologies are built very much ad-hoc. A terminology is first developed providing a controlled vocabulary for the subject area or domain of interest, then it is organized into a taxonomy where key concepts are identified, and finally these concepts are defined and related to create an ontology. The intent of this paper is to show that domain analysis methods can be used for building ontologies. Domain analysis aims at generic models that represent groups of similar systems within an application domain. In this sense, it deals with categorization of common objects and operations, with clear, unambiguous definitions of them and with defining their relationships.
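The pipeline the abstract describes (controlled vocabulary, then taxonomy, then ontology) rests on grouping terms into facets. A tiny sketch of that faceted step, with facet names and terms invented for illustration:

```python
# Invented facets: each facet is a controlled set of terms.
FACETS = {
    "material": {"steel", "wood"},
    "process": {"casting", "milling"},
}

def describe(**chosen):
    """Validate one term per facet and return the faceted description."""
    for facet, term in chosen.items():
        if term not in FACETS.get(facet, set()):
            raise ValueError(f"{term!r} is not a term in facet {facet!r}")
    return chosen
```

Composing descriptions from independent facets, rather than enumerating every combination in one big tree, is what keeps a faceted scheme small while still covering the domain.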
  20. Curras, E.: Ontologies, taxonomy and thesauri in information organisation and retrieval (2010) 0.03
    Abstract
    The originality of this book, which deals with such a new subject matter, lies in the application of methods and concepts never used before - such as Ontologies and Taxonomies, as well as Thesauri - to the ordering of knowledge based on primary information. Chapters in the book also examine the study of Ontologies, Taxonomies and Thesauri from the perspective of Systematics and General Systems Theory. "Ontologies, Taxonomy and Thesauri in Information Organisation and Retrieval" will be extremely useful to those operating within the network of related fields, which includes Documentation and Information Science.
    Content
    Contents:
    1. From classifications to ontologies: Knowledge - A new concept of knowledge - Knowledge and information - Knowledge organisation - Knowledge organisation and representation - Cognitive sciences - Talent management - Learning systematisation - Historical evolution - From classification to knowledge organisation - Why ontologies exist - Ontologies - The structure of ontologies
    2. Taxonomies and thesauri: From ordering to taxonomy - The origins of taxonomy - Hierarchical and horizontal order - Correlation with classifications - Taxonomy in computer science - Computing taxonomy - Definitions - Virtual taxonomy, cybernetic taxonomy - Taxonomy in Information Science - Similarities between taxonomies and thesauri - Differences between taxonomies and thesauri
    3. Thesauri: Terminology in classification systems - Terminological languages - Thesauri - Thesauri definitions - Conditions that a thesaurus must fulfil - Historical evolution - Classes of thesauri
    4. Thesauri in (cladist) systematics: Systematics - Systematics as a noun - Definitions and historic evolution over time - Differences between taxonomy and systematics - Systematics in thesaurus construction theory - Classic, numerical and cladist systematics - Classic systematics in information science - Numerical systematics in information science - Thesauri in cladist systematics - Systematics in information technology - Some examples
    5. Thesauri in systems theory: Historical evolution - Approach to systems - Systems theory applied to the construction of thesauri - Components - Classes of system - Peculiarities of these systems - Working methods - Systems theory applied to ontologies and taxonomies

Languages

  • e 143
  • d 12
  • sp 1

Types

  • a 112
  • el 41
  • m 10
  • x 7
  • n 5
  • p 3
  • r 1