Search (7 results, page 1 of 1)

  • theme_ss:"Klassifikationssysteme"
  • type_ss:"el"
  1. GHB-Systematik (1996-) 0.01
    0.011325077 = product of:
      0.04530031 = sum of:
        0.04530031 = weight(_text_:von in 6232) [ClassicSimilarity], result of:
          0.04530031 = score(doc=6232,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.35372335 = fieldWeight in 6232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.09375 = fieldNorm(doc=6232)
      0.25 = coord(1/4)
    
    Issue
    Updated 1996 version of the 1977 edition.
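    The relevance figures above are Lucene ClassicSimilarity "explain" output. The following is a minimal Python sketch of the arithmetic behind the score of result 1, with all constants copied from the explain tree; it only reproduces the numbers shown and is not the Lucene implementation itself:

    import math

    # Constants copied from the explain tree for result 1 (doc 6232, term "von")
    freq       = 2.0          # termFreq=2.0
    doc_freq   = 8340         # docFreq from idf(...)
    max_docs   = 44218        # maxDocs from idf(...)
    query_norm = 0.04800207   # queryNorm
    field_norm = 0.09375      # fieldNorm(doc=6232)
    coord      = 0.25         # coord(1/4): 1 of 4 query clauses matched this document

    idf          = 1.0 + math.log(max_docs / (doc_freq + 1))  # 2.6679487
    tf           = math.sqrt(freq)                            # 1.4142135
    query_weight = idf * query_norm                           # 0.12806706
    field_weight = tf * idf * field_norm                      # 0.35372335
    term_score   = query_weight * field_weight                # 0.04530031
    print(coord * term_score)                                 # 0.011325077 = document score

    The remaining results follow the same pattern: each matched query term contributes queryWeight × fieldWeight, the contributions are summed, and the sum is scaled by the coord factors for the fraction of query clauses that matched (e.g. coord(2/3) and coord(1/4) for result 2).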
  2. Electronic Dewey (1993) 0.01
    0.009611643 = product of:
      0.03844657 = sum of:
        0.03844657 = product of:
          0.057669856 = sum of:
            0.005640907 = weight(_text_:a in 1088) [ClassicSimilarity], result of:
              0.005640907 = score(doc=1088,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.10191591 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1088)
            0.052028947 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
              0.052028947 = score(doc=1088,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.30952093 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1088)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    The CD-ROM version of the 20th edition of the DDC, featuring advanced online search and windowing techniques, full-text indexing, a personal notepad, LC subject headings linked to DDC numbers, and a database of all DDC changes.
    Footnote
    Reviewed in: Cataloging and classification quarterly 19(1994) no.1, p.134-137 (M. Carpenter). - A Windows version, 'Electronic Dewey for Windows', has since become available; cf. Knowledge organization 22(1995) no.1, p.17
  3. Systematik für Bibliotheken : SfB (1997) 0.01
    0.009342712 = product of:
      0.03737085 = sum of:
        0.03737085 = weight(_text_:von in 893) [ClassicSimilarity], result of:
          0.03737085 = score(doc=893,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29180688 = fieldWeight in 893, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=893)
      0.25 = coord(1/4)
    
    Abstract
    The SfB is a shelf classification scheme consisting of 31 subject areas and several formal and subject keys. Precisely because of its differentiated subdivision of library holdings and libraries' efforts to reduce the number of classes, the SfB is suitable for the subject-based shelf arrangement of small to very large library collections. The SfB was originally developed on the basis of the Hanover classification (SSH) and first published by K.G. Saur between 1978 and 1987. Alongside the ASB, the SfB is used in 40 public and academic libraries. The present revision takes new developments in the subject fields into account and, as far as possible, aligns its terminology with the vocabulary of the Schlagwortnormdatei. The printed edition of the SfB comes with a machine-readable version on 3.5" diskette. Regular updates are planned.
  4. Voß, J.: Verbundzentrale des GBV übernimmt BARTOC (2020) 0.01
    0.007550052 = product of:
      0.030200208 = sum of:
        0.030200208 = weight(_text_:von in 25) [ClassicSimilarity], result of:
          0.030200208 = score(doc=25,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.23581557 = fieldWeight in 25, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0625 = fieldNorm(doc=25)
      0.25 = coord(1/4)
    
    Abstract
    The VZG took over operation and further technical development in November 2020. BARTOC contains information on more than 5,000 knowledge organization systems such as classifications, thesauri, ontologies and authority files. The content is maintained by an international group of editors and is freely available in various forms. The address https://bartoc.org/ and all BARTOC URLs remain valid. The editorial team welcomes notes on additions and corrections.
  5. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo : a Web-Scale interface for ontology archiving under consumer-oriented aspects (2020) 0.00
    8.2263234E-4 = product of:
      0.0032905294 = sum of:
        0.0032905294 = product of:
          0.009871588 = sum of:
            0.009871588 = weight(_text_:a in 52) [ClassicSimilarity], result of:
              0.009871588 = score(doc=52,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.17835285 = fieldWeight in 52, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=52)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    While thousands of ontologies exist on the web, a unified system for handling online ontologies - in particular with respect to discovery, versioning, access, quality control, mappings - has not yet surfaced, and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web scale. A comparison to existing approaches and ontology repositories is given.
    Type
    a
  6. The Computer Science Ontology (CSO) (2018) 0.00
    6.569507E-4 = product of:
      0.0026278028 = sum of:
        0.0026278028 = product of:
          0.007883408 = sum of:
            0.007883408 = weight(_text_:a in 4429) [ClassicSimilarity], result of:
              0.007883408 = score(doc=4429,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14243183 = fieldWeight in 4429, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4429)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated using the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. Some relationships were also revised manually by experts during the preparation of two ontology-assisted surveys in the fields of Semantic Web and Software Architecture. The main root of CSO is Computer Science; however, the ontology also includes a few secondary roots, such as Linguistics, Geometry, Semantics, and so on. CSO presents two main advantages over manually crafted categorisations used in Computer Science (e.g., the 2012 ACM Classification, the Microsoft Academic Search Classification). First, it can characterise higher-level research areas by means of hundreds of sub-topics and related terms, which makes it possible to map very specific terms to higher-level research areas. Second, it can be easily updated by running Klink-2 on a set of new publications. A more comprehensive discussion of the advantages of adopting an automatically generated ontology in the scholarly domain can be found in.
  7. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    5.8168895E-4 = product of:
      0.0023267558 = sum of:
        0.0023267558 = product of:
          0.0069802674 = sum of:
            0.0069802674 = weight(_text_:a in 53) [ClassicSimilarity], result of:
              0.0069802674 = score(doc=53,freq=16.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12611452 = fieldWeight in 53, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=53)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
    # Community action on individual ontologies
    We would like to call on all ontology maintainers and consumers to help us increase the average star rating of the web of ontologies by fixing and improving its ontologies. You can easily check an ontology at https://archivo.dbpedia.org/info. If you are an ontology maintainer, just release a patched version - Archivo will automatically pick it up 8 hours later. If you are a user of an ontology and want your consumed data to become FAIRer, please inform the ontology maintainer about the issues found with Archivo. The star rating is very basic and only requires fixing small things. However, the impact on technical and legal usability can be immense.
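    Such a check can also be scripted. A minimal sketch, assuming the info page accepts the ontology IRI via an "o" query parameter (the parameter name and the example IRI are assumptions; https://archivo.dbpedia.org/info is the entry point named above):

    import requests

    # Assumption: https://archivo.dbpedia.org/info?o=<ontology-IRI> shows the Archivo
    # record (star rating, versions) for that ontology; if not, open the info page in
    # a browser and search manually.
    ontology_iri = "http://purl.org/dc/terms/"   # example ontology to check
    resp = requests.get("https://archivo.dbpedia.org/info",
                        params={"o": ontology_iri}, timeout=30)
    resp.raise_for_status()
    print(resp.url)   # URL of the info page for this ontology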
    # Community action on all ontologies (quality, FAIRness, conformity)
    Archivo is extensible and allows contributions to give consumers a central place to encode their requirements. We envision fostering adherence to standards and strengthening incentives for publishers to build a better (FAIRer) web of ontologies.
    1. SHACL (https://www.w3.org/TR/shacl/, co-edited by DBpedia's CTO D. Kontokostas) enables easy testing of ontologies. Archivo offers free SHACL continuous integration testing for ontologies. Anyone can implement their SHACL tests and add them to the SHACL library on GitHub. We believe that there are many synergies, i.e. SHACL tests for your ontology are helpful for others as well (a minimal local test sketch follows below).
    2. We are looking for ontology experts to join DBpedia and discuss further validation (e.g. stars) to increase FAIRness and quality of ontologies. We are forming a steering committee and also a PC for the upcoming Vocarnival at SEMANTiCS 2021. Please message hellmann@informatik.uni-leipzig.de if you would like to join. We would like to extend the Archivo platform with relevant visualisations, tests, editing aids, mapping management tools and quality checks.
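    For readers who want to try a SHACL test locally before contributing it to the SHACL library mentioned above, the sketch below uses the pySHACL package (not part of Archivo; any SHACL engine works). The shape and the file name are illustrative assumptions, not Archivo's actual test suite:

    from pyshacl import validate
    from rdflib import Graph

    # Illustrative shape: every owl:Ontology should declare a license, one aspect of
    # the "legal usability" concern mentioned above.
    shapes = Graph().parse(data="""
        @prefix sh:  <http://www.w3.org/ns/shacl#> .
        @prefix owl: <http://www.w3.org/2002/07/owl#> .
        @prefix dct: <http://purl.org/dc/terms/> .
        @prefix ex:  <http://example.org/archivo-tests#> .

        ex:OntologyLicenseShape a sh:NodeShape ;
            sh:targetClass owl:Ontology ;
            sh:property [ sh:path dct:license ; sh:minCount 1 ] .
    """, format="turtle")

    ontology = Graph().parse("my-ontology.ttl", format="turtle")   # illustrative file name

    conforms, _, report = validate(ontology, shacl_graph=shapes)
    print(conforms)   # True if the ontology declares at least one dct:license
    print(report)     # human-readable validation report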
    # How does Archivo work?
    Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates and archives the latest snapshot persistently on the DBpedia Databus.
    # Archivo's mission
    Archivo's mission is to improve FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable and enforces interoperability with its star rating.
    - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.
    - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentralised web of ontologies.