Search (3 results, page 1 of 1)

  • theme_ss:"Klassifikationssysteme"
  • type_ss:"el"
  1. The Computer Science Ontology (CSO) (2018) 0.01
    0.005645824 = product of:
      0.045166593 = sum of:
        0.045166593 = weight(_text_:semantic in 4429) [ClassicSimilarity], result of:
          0.045166593 = score(doc=4429,freq=4.0), product of:
            0.13904566 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.033441637 = queryNorm
            0.32483283 = fieldWeight in 4429, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4429)
      0.125 = coord(1/8)
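
    The block above is Lucene's ClassicSimilarity (TF-IDF) explanation of the document's score: the weight of the term "semantic" is the product of its query weight and field weight, scaled by the coordination factor because only one of eight query clauses matched. A minimal Python sketch reproducing the arithmetic from the values shown (the variable names are illustrative, not part of the search application):

    # Reproduce the ClassicSimilarity score for "semantic" in doc 4429.
    # All constants are copied from the explain output above.
    import math

    idf = 4.1578603           # idf(docFreq=1879, maxDocs=44218)
    query_norm = 0.033441637  # queryNorm
    freq = 4.0                # termFreq of "semantic" in the field
    field_norm = 0.0390625    # fieldNorm(doc=4429)
    coord = 1 / 8             # 1 of 8 query clauses matched

    query_weight = idf * query_norm       # 0.13904566
    tf = math.sqrt(freq)                  # 2.0
    field_weight = tf * idf * field_norm  # 0.32483283
    weight = query_weight * field_weight  # 0.045166593
    print(weight * coord)                 # ~0.005645824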
    
    Abstract
    The Computer Science Ontology (CSO) is a large-scale ontology of research areas that was automatically generated by running the Klink-2 algorithm on the Rexplore dataset, which consists of about 16 million publications, mainly in the field of Computer Science. The Klink-2 algorithm combines semantic technologies, machine learning, and knowledge from external sources to automatically generate a fully populated ontology of research areas. Some relationships were also revised manually by experts during the preparation of two ontology-assisted surveys in the fields of Semantic Web and Software Architecture. The main root of CSO is Computer Science; however, the ontology also includes a few secondary roots, such as Linguistics, Geometry, and Semantics. CSO presents two main advantages over manually crafted categorisations used in Computer Science (e.g., the 2012 ACM Classification or the Microsoft Academic Search Classification). First, it can characterise higher-level research areas by means of hundreds of sub-topics and related terms, which makes it possible to map very specific terms to higher-level research areas. Second, it can easily be updated by running Klink-2 on a set of new publications. A more comprehensive discussion of the advantages of adopting an automatically generated ontology in the scholarly domain can be found in.
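
    The term-to-area mapping described above can be pictured as climbing super-topic links until a root is reached. A minimal sketch over a hypothetical fragment of such a topic hierarchy (the topic names and the super_topic table are illustrative stand-ins, not actual CSO data):

    # Illustrative only: map a specific term to a higher-level research area
    # by following parent ("super topic") links in a toy hierarchy.
    super_topic = {
        "neural machine translation": "machine translation",
        "machine translation": "natural language processing",
        "natural language processing": "artificial intelligence",
        "artificial intelligence": "computer science",
    }

    def top_level_area(term: str) -> str:
        # Climb until a term with no recorded parent (a root) is reached.
        while term in super_topic:
            term = super_topic[term]
        return term

    print(top_level_area("neural machine translation"))  # computer science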
  2. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    0.0027945403 = product of:
      0.022356322 = sum of:
        0.022356322 = weight(_text_:semantic in 53) [ClassicSimilarity], result of:
          0.022356322 = score(doc=53,freq=2.0), product of:
            0.13904566 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.033441637 = queryNorm
            0.16078404 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.125 = coord(1/8)
    
    Content
    # How does Archivo work?
    Each week, Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus.
    # Archivo's mission
    Archivo's mission is to improve the FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated and machine-readable, and it enforces interoperability with its star rating.
    - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.
    - Ontology users can better find, access, and re-use ontologies. Snapshots are persisted in case the original is no longer reachable, adding a layer of reliability to the decentralised web of ontologies.
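
    Because snapshots stay available even when the original ontology IRI goes offline, a client can always fall back to the archive. A minimal sketch of fetching the latest persisted snapshot; the download endpoint and its o/f parameters are an assumption about Archivo's public interface, not something stated in the text above:

    # Sketch: retrieve the latest archived serialization of an ontology.
    # The endpoint and its parameters (o = ontology IRI, f = format) are
    # assumed, not taken from the description above.
    import urllib.parse
    import urllib.request

    ARCHIVO_DOWNLOAD = "https://archivo.dbpedia.org/download"  # assumed endpoint

    def fetch_latest_snapshot(ontology_iri: str, fmt: str = "ttl") -> bytes:
        query = urllib.parse.urlencode({"o": ontology_iri, "f": fmt})
        with urllib.request.urlopen(ARCHIVO_DOWNLOAD + "?" + query) as response:
            return response.read()

    # Example: fetch a snapshot of a well-known vocabulary.
    print(len(fetch_latest_snapshot("http://xmlns.com/foaf/0.1/")), "bytes")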
  3. Electronic Dewey (1993) 0.00
    0.0015102935 = product of:
      0.012082348 = sum of:
        0.012082348 = product of:
          0.03624704 = sum of:
            0.03624704 = weight(_text_:22 in 1088) [ClassicSimilarity], result of:
              0.03624704 = score(doc=1088,freq=2.0), product of:
                0.117106915 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.033441637 = queryNorm
                0.30952093 = fieldWeight in 1088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1088)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Footnote
    Review in: Cataloging and classification quarterly 19(1994) no.1, p.134-137 (M. Carpenter). - A Windows version, 'Electronic Dewey for Windows', has since become available; cf. Knowledge organization 22(1995) no.1, p.17