Search (172 results, page 1 of 9)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.31
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  2. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.29
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  3. Neubauer, G.: Visualization of typed links in linked data (2017) 0.18
    
    Abstract
    This thesis deals with visualizations of typed links in Linked Data. The scientific fields that broadly delimit the content of the contribution are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, is an extension of the World Wide Web (Web 2.0). Current research addresses the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are a central requirement of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary motivation for this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, the information is extended step by step, with the aim of offering practical guidelines, into an interconnected set of design guidelines. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and future requirements of the project that were identified through testing.
    Theme
    Semantic Web
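    As a rough illustration of the abstract above, the following Python sketch models typed links as a labeled directed graph, the way a network visualization of Linked Data might; the resources and predicates are hypothetical examples, not taken from the thesis.

      # Typed links as a labeled multigraph (illustrative sketch only)
      import networkx as nx

      g = nx.MultiDiGraph()
      # Each edge carries its link type (the RDF predicate) as an attribute;
      # the dbpedia:/dbo: identifiers below are assumed examples.
      g.add_edge("dbpedia:Vienna", "dbpedia:Austria", predicate="dbo:country")
      g.add_edge("dbpedia:Vienna", "dbpedia:Danube", predicate="dbo:riverCrossing")

      for s, o, data in g.edges(data=True):
          print(f"{s} --[{data['predicate']}]--> {o}")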
  4. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.13
    
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
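    As a concrete illustration of the data model described above, the following Python sketch builds a tiny SKOS vocabulary with the rdflib library; the http://example.org/vocab/ namespace and all labels are invented for the example.

      # Minimal SKOS concept scheme built with rdflib (illustrative sketch)
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/vocab/")  # hypothetical namespace
      g = Graph()
      g.bind("skos", SKOS)

      g.add((EX.animals, RDF.type, SKOS.ConceptScheme))
      g.add((EX.mammal, RDF.type, SKOS.Concept))
      g.add((EX.mammal, SKOS.topConceptOf, EX.animals))  # broadest conceptual level
      g.add((EX.cat, RDF.type, SKOS.Concept))
      g.add((EX.cat, SKOS.prefLabel, Literal("cat", lang="en")))    # preferred label
      g.add((EX.cat, SKOS.prefLabel, Literal("Katze", lang="de")))  # any language
      g.add((EX.cat, SKOS.altLabel, Literal("domestic cat", lang="en")))
      g.add((EX.cat, SKOS.broader, EX.mammal))   # hierarchy via broader/narrower
      g.add((EX.cat, SKOS.inScheme, EX.animals)) # grouped into a concept scheme

      print(g.serialize(format="turtle"))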
  5. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.12
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching, and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical, and application perspectives.
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
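    The book treats matching strategies abstractly; as a rough sketch (not taken from the book), a single-strategy matcher can be as simple as comparing entity labels with an edit-based string similarity and keeping pairs above a threshold as candidate correspondences. The labels and the threshold below are invented for illustration.

      # Naive single-strategy ontology matcher based on label similarity
      from difflib import SequenceMatcher

      onto_a = ["Author", "Publication", "Journal Article"]  # labels of ontology A
      onto_b = ["Writer", "Publications", "Article"]         # labels of ontology B

      def label_similarity(a: str, b: str) -> float:
          """Normalized edit-based similarity between two labels."""
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      THRESHOLD = 0.6  # assumed cut-off
      alignment = [
          (a, b, round(label_similarity(a, b), 2))
          for a in onto_a
          for b in onto_b
          if label_similarity(a, b) >= THRESHOLD
      ]
      print(alignment)  # candidate correspondences, e.g. ('Publication', 'Publications', ...)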
  6. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.11
    
    Abstract
    On 29 and 30 October 2009, the second international UDC seminar on "Classification at a Crossroad" took place at the Royal Library in The Hague. Like the first conference of this kind in 2007, it was organized by the UDC Consortium (UDCC). This year's event focused on indexing the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search, and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. The program covered a broad range, with 22 papers from 14 different countries; the United Kingdom was most strongly represented, with five contributions. On both conference days, the daily focus was set by the opening talks, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  7. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.10
    
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It aims to find semantic relationships between entities (i.e., concepts, attributes, and relations) of two ontologies. It benefits many applications, such as the integration of ontology-based web data sources and the interoperability of agents or web services. To reduce user effort as much as possible, (semi-)automatic ontology mapping is becoming more and more important. In the existing literature, many approaches have attracted considerable interest by combining several different similarity/mapping strategies (so-called multi-strategy based mapping). However, experiments show that multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly aim to deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or just one single-strategy based algorithm? (2) if the task is suitable for multi-strategy, how should we select the strategies for the final combined scenario? We propose an approach of multiple strategy detection for ontology mapping. The results obtained so far show that multi-strategy detection improves precision and recall significantly.
    Content
    Paper presented at the Workshop on The Semantic Computing Initiative (SeC 2005) --- From Semantic Web to Semantic World --- held in conjunction with the 14th Int'l Conf. on World Wide Web (WWW2005); cf.: http://www.instsec.org/2005ws/.
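    Following the multi-strategy discussion in the abstract above, the sketch below combines two similarity strategies with fixed weights; the strategies and weights are illustrative assumptions, not the detection algorithm proposed in the paper.

      # Hypothetical multi-strategy combination for ontology mapping
      from difflib import SequenceMatcher

      def edit_sim(a: str, b: str) -> float:
          # Strategy 1: edit-based string similarity
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def token_sim(a: str, b: str) -> float:
          # Strategy 2: Jaccard overlap of label tokens
          ta, tb = set(a.lower().split()), set(b.lower().split())
          return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

      STRATEGIES = [(edit_sim, 0.4), (token_sim, 0.6)]  # assumed weights

      def combined_sim(a: str, b: str) -> float:
          return sum(weight * strategy(a, b) for strategy, weight in STRATEGIES)

      # The token strategy rescues word-order variants that edit distance penalizes
      print(round(combined_sim("journal article", "article journal"), 2))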
  8. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.09
    
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world using MAchine Readable Cataloging (MARC) as a basis for comparison of the scope and extensibility of these potential new standards. The researchers selected 14 libraries from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers currently developing Library Linked Data (LLD) schemas. The choices of models, schemas, and elements used in each library's LD can create interoperability issues for LD services because of substantial differences between schemas and data models evolving via local decisions. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide existed as well between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even with no listed data model. Libraries worldwide are not using the same elements or even the same ontologies, schemas and data models to describe the same materials using the same general concepts.
    Theme
    Semantic Web
  9. Tennis, J.T.: Versioning concept schemes for persistent retrieval (2006) 0.06
    
    Abstract
    Things change. Words change, meaning changes, and use changes both words and meaning. In information access systems this means that concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versions of concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and, in so doing, enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
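    As a loose sketch of the kind of change tracking the paper argues for (using plain graph comparison rather than the paper's ontogeny mechanism), the following Python example diffs two editions of a one-concept scheme with rdflib; the namespace and labels are hypothetical.

      # Diffing two editions of a concept scheme (illustrative sketch)
      from rdflib import Graph, Literal, Namespace
      from rdflib.compare import graph_diff
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/scheme/")  # hypothetical namespace

      def edition(pref_label: str) -> Graph:
          g = Graph()
          g.add((EX.c1, RDF.type, SKOS.Concept))
          g.add((EX.c1, SKOS.prefLabel, Literal(pref_label, lang="en")))
          return g

      v1 = edition("aeroplanes")  # first edition
      v2 = edition("airplanes")   # second edition: preferred label revised

      in_both, only_v1, only_v2 = graph_diff(v1, v2)
      for triple in only_v1:
          print("removed:", triple)
      for triple in only_v2:
          print("added:  ", triple)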
  10. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.06
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple, but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
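    To make the idea of executing mapping relationships concrete, here is a rough sketch (an assumed example, not the thesis' actual mapping language) that expresses one mapping as a SPARQL CONSTRUCT query and runs it with rdflib; the ex1:/ex2: schema URIs are invented.

      # One mapping relationship executed as a SPARQL CONSTRUCT query
      from rdflib import Graph, Literal, Namespace, URIRef

      EX1 = Namespace("http://example.org/schema1/")  # hypothetical source schema
      src = Graph()
      src.add((URIRef("http://example.org/item/1"), EX1.title, Literal("Mapping techniques")))

      # "ex1:title maps to ex2:name", written as an executable query
      query = """
      PREFIX ex1: <http://example.org/schema1/>
      PREFIX ex2: <http://example.org/schema2/>
      CONSTRUCT { ?s ex2:name ?o }
      WHERE     { ?s ex1:title ?o }
      """

      for s, p, o in src.query(query):
          print(s, p, o)  # the same data, now described with the target schema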
  11. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.06
    
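    The explain tree above follows Lucene's ClassicSimilarity decomposition. As a sanity check, recomputing the weight(_text_:web in 4232) leaf and the final score of entry 11 from the factors stated in the tree reproduces the reported values:

    $$\begin{aligned}
    \mathrm{queryWeight} &= \mathrm{idf}\times\mathrm{queryNorm} = 3.2635105 \times 0.03421255 \approx 0.11165301,\\
    \mathrm{fieldWeight} &= \mathrm{tf}\times\mathrm{idf}\times\mathrm{fieldNorm} = \sqrt{32}\times 3.2635105 \times 0.01953125 \approx 0.36057037,\\
    \mathrm{weight} &= \mathrm{queryWeight}\times\mathrm{fieldWeight} \approx 0.11165301 \times 0.36057037 \approx 0.04025876,\\
    \mathrm{score} &= \mathrm{coord}\times\sum\mathrm{weights} = \tfrac{6}{14}\times 0.14554358 \approx 0.06237582.
    \end{aligned}$$
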
    Abstract
    After the launch of the World Wide Web, it became clear that searching documents on the Web would not be trivial. Well-known engines for searching the web, like Google, focus on search in web documents using keywords. The documents are structured and indexed to ensure keywords match documents as accurately as possible. However, searching by keywords does not always suffice. It is often the case that users do not know exactly how to formulate the search query or which keywords guarantee retrieving the most relevant documents. Besides that, users often would rather browse information than look up something specific. It turned out that there is a need for systems that enable more interactivity and facilitate the gradual refinement of search queries to explore the Web. Users expect more from the Web because the short keyword-based queries they pose during search do not suffice for all cases. On top of that, the Web is changing structurally. The Web comprises, apart from a collection of documents, more and more linked data: pieces of information structured so they can be processed by machines. The consistently applied semantics allow users to indicate their search intentions to machines exactly. This is made possible by describing data following controlled vocabularies - concept lists composed by experts, published in a uniquely identifiable way on the Web. Even so, it is still not trivial to explore data on the Web. There is a large variety of vocabularies, and various data sources use different terms to identify the same concepts.
    This PhD thesis describes how to effectively explore linked data on the Web. The main focus is on scenarios where users want to discover relationships between resources rather than find out more about something specific. Searching for a specific document or piece of information fits in the theoretical framework of information retrieval and is associated with exploratory search. Exploratory search goes beyond 'looking up something' when users are seeking more detailed understanding, further investigation or navigation of the initial search results. The ideas behind exploratory search and querying linked data merge when it comes to the way knowledge is represented and indexed by machines - how data is structured and stored for optimal searchability. Queries and information should be aligned so that searches also reveal connections between results. This implies that they take into account the same semantic entities, relevant at that moment. To realize this, we research three techniques that are evaluated one by one in an experimental set-up to assess how well they succeed in their goals. In the end, the techniques are applied to a practical use case that focuses on forming a bridge between the Web and the use of digital libraries in scientific research. Our first technique focuses on the interactive visualization of search results. Linked data resources can be brought into relation with each other at will, which leads to complex and diverse graph structures. Our technique facilitates navigation and supports a workflow that starts from a broad overview of the data, allows narrowing down to the desired level of detail, and then broadening out again. To validate the flow, two visualizations were implemented and presented to test users. The users judged the usability of the visualizations, how well the visualizations fit in the workflow, and to what degree their features seemed useful for the exploration of linked data.
    Theme
    Semantic Web
  12. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.06
    0.058392297 = product of:
      0.1362487 = sum of:
        0.022337925 = weight(_text_:world in 4659) [ClassicSimilarity], result of:
          0.022337925 = score(doc=4659,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.16986786 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.029682916 = weight(_text_:wide in 4659) [ClassicSimilarity], result of:
          0.029682916 = score(doc=4659,freq=2.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.1958137 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.016103506 = weight(_text_:web in 4659) [ClassicSimilarity], result of:
          0.016103506 = score(doc=4659,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.14422815 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.022337925 = weight(_text_:world in 4659) [ClassicSimilarity], result of:
          0.022337925 = score(doc=4659,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.16986786 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.029682916 = weight(_text_:wide in 4659) [ClassicSimilarity], result of:
          0.029682916 = score(doc=4659,freq=2.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.1958137 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.016103506 = weight(_text_:web in 4659) [ClassicSimilarity], result of:
          0.016103506 = score(doc=4659,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.14422815 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
      0.42857143 = coord(6/14)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and reasoning ability over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Fully or semi-automated mapping approaches have been examined in several research studies. Previous fully or semi-automated mapping approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods. In this dissertation, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance-learning-based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance-learning-based approach has shown potential for solving the ontology mapping problem on the OAEI benchmark tests.
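
    The information-retrieval flavour of ontology mapping that PRIOR+ draws on can be caricatured in a few lines: score element pairs by the cosine similarity of their tokenized labels and keep the best pair above a threshold. The ontologies, threshold and tokenization below are invented for illustration; PRIOR+ itself combines considerably more evidence (structure, profiles, AI techniques).

    from collections import Counter
    from math import sqrt

    def tokens(label):
        # crude label normalization: lowercase, split camel/snake-ish names
        return label.lower().replace("_", " ").split()

    def cosine(a, b):
        # cosine similarity between two token multisets
        va, vb = Counter(a), Counter(b)
        dot = sum(va[t] * vb[t] for t in va)
        na = sqrt(sum(v * v for v in va.values()))
        nb = sqrt(sum(v * v for v in vb.values()))
        return dot / (na * nb) if na and nb else 0.0

    onto_a = ["Person", "has_name", "Publication"]
    onto_b = ["Agent", "name", "published_work"]

    for ea in onto_a:
        best = max(onto_b, key=lambda eb: cosine(tokens(ea), tokens(eb)))
        score = cosine(tokens(ea), tokens(best))
        if score > 0.5:  # invented acceptance threshold
            print(f"{ea} <-> {best}  (cosine {score:.2f})")
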
  13. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.06
    0.0557096 = product of:
      0.19498359 = sum of:
        0.037103646 = weight(_text_:wide in 6061) [ClassicSimilarity], result of:
          0.037103646 = score(doc=6061,freq=2.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.24476713 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.060388148 = weight(_text_:web in 6061) [ClassicSimilarity], result of:
          0.060388148 = score(doc=6061,freq=18.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.5408555 = fieldWeight in 6061, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.037103646 = weight(_text_:wide in 6061) [ClassicSimilarity], result of:
          0.037103646 = score(doc=6061,freq=2.0), product of:
            0.15158753 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03421255 = queryNorm
            0.24476713 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.060388148 = weight(_text_:web in 6061) [ClassicSimilarity], result of:
          0.060388148 = score(doc=6061,freq=18.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.5408555 = fieldWeight in 6061, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.2857143 = coord(4/14)
    
    Abstract
    The mid-1990s were marked by increased enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval originated as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological structure that enables the indexing and searching (in seconds) of unimaginable amounts of data worldwide, new assessment processes for the ranking of search results are being developed which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to a previously unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. Therefore, the recognition of the "authoritativeness" of Web pages by general search engines such as Google was one of the most important factors for their success.
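
    The link-topology ranking that the abstract credits for the success of engines such as Google is canonically exemplified by PageRank. The following compact power-iteration sketch is purely illustrative and is not taken from Krause's paper:

    def pagerank(links, damping=0.85, iters=50):
        # links: page -> list of pages it links to
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for p, outs in links.items():
                if outs:
                    share = damping * rank[p] / len(outs)
                    for q in outs:
                        new[q] += share
                else:  # dangling page: spread its rank evenly
                    for q in pages:
                        new[q] += damping * rank[p] / len(pages)
            rank = new
        return rank

    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(pagerank(web))  # "c" ends up most "authoritative"
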
    Theme
    Semantic Web
  14. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.05
    0.048210833 = product of:
      0.13499033 = sum of:
        0.03350689 = weight(_text_:world in 1967) [ClassicSimilarity], result of:
          0.03350689 = score(doc=1967,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.25480178 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.02415526 = weight(_text_:web in 1967) [ClassicSimilarity], result of:
          0.02415526 = score(doc=1967,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.21634221 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.03350689 = weight(_text_:world in 1967) [ClassicSimilarity], result of:
          0.03350689 = score(doc=1967,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.25480178 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.02415526 = weight(_text_:web in 1967) [ClassicSimilarity], result of:
          0.02415526 = score(doc=1967,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.21634221 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.019666033 = product of:
          0.039332066 = sum of:
            0.039332066 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.039332066 = score(doc=1967,freq=4.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.35714287 = coord(5/14)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) the Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
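
    FRSAD's two core entities are thema (anything that can be the subject of a work) and nomen (any sign by which a thema is known); modeling classification data in FRSAD essentially treats one class as a single thema carrying nomens from each translation. A minimal data-structure sketch of that idea follows; the class captions below are illustrative stand-ins, not quotations from DDC 22:

    from dataclasses import dataclass, field

    @dataclass
    class Nomen:
        value: str      # the sign itself (caption, notation, ...)
        language: str   # language tag; "zxx" = no linguistic content
        scheme: str     # which vocabulary/translation the nomen belongs to

    @dataclass
    class Thema:
        nomens: list = field(default_factory=list)

    # one DDC class as a single thema known under several nomens
    ddc_class = Thema(nomens=[
        Nomen("004", "zxx", "DDC 22 notation"),
        Nomen("Data processing & computer science", "en", "DDC 22 (English)"),
        Nomen("Databehandling & datavetenskap", "sv", "DDC 22 (Swedish-English mixed)"),
    ])
    print([n.value for n in ddc_class.nomens])
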
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  15. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2014) 0.04
    0.04017569 = product of:
      0.112491935 = sum of:
        0.027922407 = weight(_text_:world in 1962) [ClassicSimilarity], result of:
          0.027922407 = score(doc=1962,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 1962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1962)
        0.020129383 = weight(_text_:web in 1962) [ClassicSimilarity], result of:
          0.020129383 = score(doc=1962,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.18028519 = fieldWeight in 1962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1962)
        0.027922407 = weight(_text_:world in 1962) [ClassicSimilarity], result of:
          0.027922407 = score(doc=1962,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 1962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1962)
        0.020129383 = weight(_text_:web in 1962) [ClassicSimilarity], result of:
          0.020129383 = score(doc=1962,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.18028519 = fieldWeight in 1962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1962)
        0.016388359 = product of:
          0.032776717 = sum of:
            0.032776717 = weight(_text_:22 in 1962) [ClassicSimilarity], result of:
              0.032776717 = score(doc=1962,freq=4.0), product of:
                0.11980651 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03421255 = queryNorm
                0.27358043 = fieldWeight in 1962, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1962)
          0.5 = coord(1/2)
      0.35714287 = coord(5/14)
    
    Abstract
    This article reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The article discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the Dewey Decimal Classification [DDC] (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Footnote
    Contribution in a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web" - Enthält Beiträge der gleichnamigen IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn.
  16. Kollia, I.; Tzouvaras, V.; Drosopoulos, N.; Stamou, G.: ¬A systemic approach for effective semantic access to cultural content (2012) 0.04
    0.038960673 = product of:
      0.13636234 = sum of:
        0.027922407 = weight(_text_:world in 130) [ClassicSimilarity], result of:
          0.027922407 = score(doc=130,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=130)
        0.040258765 = weight(_text_:web in 130) [ClassicSimilarity], result of:
          0.040258765 = score(doc=130,freq=8.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.36057037 = fieldWeight in 130, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=130)
        0.027922407 = weight(_text_:world in 130) [ClassicSimilarity], result of:
          0.027922407 = score(doc=130,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=130)
        0.040258765 = weight(_text_:web in 130) [ClassicSimilarity], result of:
          0.040258765 = score(doc=130,freq=8.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.36057037 = fieldWeight in 130, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=130)
      0.2857143 = coord(4/14)
    
    Abstract
    A large ongoing activity for the digitization, dissemination and preservation of cultural heritage is taking place in Europe, the United States and the rest of the world, involving all types of cultural institutions (galleries, libraries, museums, archives) and all types of cultural content. The development of Europeana as a single point of access to European cultural heritage has probably been the most important result of the activities in the field to date. Semantic interoperability, linked open data, user involvement and user-generated content are key issues in these developments. This paper presents a system that gives content providers and users the ability to map their own metadata schemas effectively to common domain standards and the Europeana (ESE, EDM) data models. The system is currently in wide use by many European research projects and by Europeana. Based on these mappings, semantic query answering techniques are proposed as a means of effective access to digital cultural heritage, providing users with content enrichment and linking of data based on their involvement, and facilitating content search and retrieval. An experimental study is presented, involving content from national content aggregators as well as thematic content aggregators and Europeana, which illustrates the proposed system.
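
    As a hedged sketch of the kind of schema crosswalk such a system supports, the snippet below maps a hypothetical provider record onto Europeana Data Model properties. The provider field names are invented; dc:title, dc:creator and edm:isShownAt are actual Dublin Core/EDM properties, but this is not the project's code:

    provider_record = {            # hypothetical aggregator export
        "titel": "Madonna col Bambino",
        "kuenstler": "Unknown master",
        "digitalisat_url": "http://example.org/obj/42.jpg",
    }

    crosswalk = {                  # provider field -> EDM/DC property
        "titel": "dc:title",
        "kuenstler": "dc:creator",
        "digitalisat_url": "edm:isShownAt",
    }

    edm_record = {crosswalk[k]: v for k, v in provider_record.items() if k in crosswalk}
    print(edm_record)
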
    Content
    Contribution to a special topic section: Semantic Web and Reasoning for Cultural Heritage and Digital Libraries: http://www.semantic-web-journal.net/content/systemic-approach-effective-semantic-access-cultural-content http://www.semantic-web-journal.net/sites/default/files/swj147_3.pdf.
    Source
    Semantic Web journal. 3(2012) no.1, S.65-83
  17. Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012) 0.04
    0.038960673 = product of:
      0.13636234 = sum of:
        0.027922407 = weight(_text_:world in 433) [ClassicSimilarity], result of:
          0.027922407 = score(doc=433,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 433, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=433)
        0.040258765 = weight(_text_:web in 433) [ClassicSimilarity], result of:
          0.040258765 = score(doc=433,freq=8.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.36057037 = fieldWeight in 433, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=433)
        0.027922407 = weight(_text_:world in 433) [ClassicSimilarity], result of:
          0.027922407 = score(doc=433,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 433, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=433)
        0.040258765 = weight(_text_:web in 433) [ClassicSimilarity], result of:
          0.040258765 = score(doc=433,freq=8.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.36057037 = fieldWeight in 433, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=433)
      0.2857143 = coord(4/14)
    
    Abstract
    The modern Web has grown from a publishing place of well-structured data and HTML pages for companies and experienced users into a vivid publishing and data exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web have become highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and thus used in other applications as well. Numerous systems nowadays already actively utilize existing content from various sources such as WordNet or Wikipedia; well-known examples include DBpedia, Freebase, Spock, and DBLife. A major challenge in combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach for addressing the entity linkage problem for heterogeneous, uncertain, and volatile data.
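
    A toy illustration of entity linkage with explicit uncertainty, in the spirit of the chapter: rather than making a hard yes/no decision, each candidate pair is assigned a linkage score, here a simple Jaccard similarity over name tokens. The records, threshold and scoring function are invented for illustration; the chapter's actual model is more sophisticated:

    def jaccard(a, b):
        # token-set similarity in [0, 1], used as a crude linkage score
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    source_a = {"p1": "J. S. Bach", "p2": "Johannes Brahms"}
    source_b = {"q1": "Johann Sebastian Bach", "q2": "Brahms, Johannes"}

    for ida, na in source_a.items():
        for idb, nb in source_b.items():
            score = jaccard(na, nb.replace(",", ""))
            if score > 0:  # keep uncertain links instead of discarding them
                print(f"{ida} ~ {idb}: linkage score = {score:.2f}")
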
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
    Theme
    Semantic Web
  18. Schreur, P.E.: ¬The use of Linked Data and artificial intelligence as key elements in the transformation of technical services (2020) 0.04
    0.038441435 = product of:
      0.13454501 = sum of:
        0.03909137 = weight(_text_:world in 125) [ClassicSimilarity], result of:
          0.03909137 = score(doc=125,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.29726875 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
        0.028181138 = weight(_text_:web in 125) [ClassicSimilarity], result of:
          0.028181138 = score(doc=125,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.25239927 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
        0.03909137 = weight(_text_:world in 125) [ClassicSimilarity], result of:
          0.03909137 = score(doc=125,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.29726875 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
        0.028181138 = weight(_text_:web in 125) [ClassicSimilarity], result of:
          0.028181138 = score(doc=125,freq=2.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.25239927 = fieldWeight in 125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=125)
      0.2857143 = coord(4/14)
    
    Abstract
    Library technical services have benefited from numerous stimuli. Although initially regarded with suspicion, transitions such as the move from catalog cards to the MARC formats have proven enormously helpful to libraries and their patrons. Linked data and artificial intelligence (AI) hold the same promise. Through the conversion of metadata surrogates (cataloging) to linked open data, libraries can represent their resources on the Semantic Web. But in order to provide some form of controlled access to unstructured data, libraries must reach beyond traditional cataloging to new tools such as AI, which can offer consistent access to a growing world of full-text resources.
  19. Haffner, A.: Internationalisierung der GND durch das Semantic Web (2012) 0.04
    0.036630847 = product of:
      0.12820795 = sum of:
        0.019545685 = weight(_text_:world in 318) [ClassicSimilarity], result of:
          0.019545685 = score(doc=318,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.14863437 = fieldWeight in 318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
        0.04455829 = weight(_text_:web in 318) [ClassicSimilarity], result of:
          0.04455829 = score(doc=318,freq=20.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.39907828 = fieldWeight in 318, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
        0.019545685 = weight(_text_:world in 318) [ClassicSimilarity], result of:
          0.019545685 = score(doc=318,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.14863437 = fieldWeight in 318, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
        0.04455829 = weight(_text_:web in 318) [ClassicSimilarity], result of:
          0.04455829 = score(doc=318,freq=20.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.39907828 = fieldWeight in 318, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=318)
      0.2857143 = coord(4/14)
    
    Abstract
    For as long as humanity has existed, people have collected information; since the Internet has existed, people have put information on the Web; and since the Semantic Web has existed, machines too are to be enabled to work with this information. Libraries are among these collectors: catalogs, bibliographies and inventory records have been kept for centuries. With the move from the card catalog to the online catalog, users suddenly became able to search holdings comfortably. By publishing data from the library world on the Semantic Web, not only the libraries' own catalog systems are to gain access to this information, but any application that can access the Web. Beyond that, the vision is that the data on the Web link up with each other - wherever possible - and grow into a gigantic semantic network that can be used as one large data pool. The prerequisite for this, as with the transition to the online catalog, is that the data be prepared in a suitable format. In the library world, authority data already serve to interlink different holdings. When a book is cataloged, it is not merely stated that someone named Thomas Mann is the author; a link is created from the catalog record to the Thomas Mann who was born on 6 June 1875 in Lübeck and died on 12 August 1955 in Zurich. The advantage of authority data entries is that they contribute to the unambiguous attribution of authorship of, or involvement in, a work. Authority records are also already available for reuse by all libraries - the step into the Semantic Web would thus open the authority data to all conceivable user groups.
    Since April 2012, the Gemeinsame Normdatei (GND, Integrated Authority File) has been the file containing the authority data used in German-language librarianship. Consequently, a representation for publishing these data as Linked Data on the Semantic Web must be established on the basis of this data set. Besides the actual provision of GND data on the Semantic Web, the data are to be linked with data sets already available as Linked Data (DBpedia, VIAF, etc.) and, where possible, to be compatible with them, thereby making the GND accessible to an international audience across domains. This document serves above all to describe how the GND Linked Data representation came about and the path to the specification of a dedicated ontology. To this end, after a short introduction to the GND, the basic principles and most important standards for publishing Linked Data on the Semantic Web are presented, so that existing vocabularies and ontologies of the library domain can then be examined. An excursus on the general procedure for providing Linked Data follows, in which the so often cited Open World Assumption is critically questioned and the problems associated with it, particularly regarding interoperability and reusability, are uncovered. To avoid interoperability problems, the recommendations of the Library Linked Data Incubator Group [LLD11] are followed.
    In the chapter on application profiles as a basis for ontology development, the specification of Dublin Core application profiles is examined critically in order to determine when and in what form their use is appropriate for the undertaking of providing Linked Data. The subsequent sections present the GND ontology, which serves as the standard for the serialization of GND data on the Semantic Web, together with the modeling decisions behind it. The technique of vocabulary alignment is given a particularly prominent position, since it is seen as a decisive mechanism for increasing interoperability and reusability. Linking to external data sets is also dealt with intensively: selected data sets were examined with regard to their quality and currency, and recommendations for the implementation within the GND data set are given. Finally, a summary and an outlook on further steps are provided.
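
    The pattern described above - a GND entity published as RDF and aligned with external Linked Data sets via owl:sameAs - can be sketched with rdflib as follows. The namespace and property names follow the public GND ontology (gndo); the GND and VIAF identifiers shown are illustrative and not verified:

    from rdflib import Graph, Namespace, Literal, URIRef
    from rdflib.namespace import OWL

    GNDO = Namespace("https://d-nb.info/standards/elementset/gnd#")
    g = Graph()
    mann = URIRef("https://d-nb.info/gnd/118577166")   # illustrative GND URI

    g.add((mann, GNDO.preferredNameForThePerson, Literal("Mann, Thomas")))
    g.add((mann, GNDO.dateOfBirth, Literal("1875-06-06")))
    g.add((mann, GNDO.dateOfDeath, Literal("1955-08-12")))
    # vocabulary alignment / links to external data sets:
    g.add((mann, OWL.sameAs, URIRef("http://dbpedia.org/resource/Thomas_Mann")))
    g.add((mann, OWL.sameAs, URIRef("http://viaf.org/viaf/44300643")))  # illustrative VIAF ID

    print(g.serialize(format="turtle"))
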
  20. Isaac, A.; Baker, T.: Linked data practice at different levels of semantic precision : the perspective of libraries, archives and museums (2015) 0.04
    0.035878584 = product of:
      0.12557504 = sum of:
        0.027922407 = weight(_text_:world in 2026) [ClassicSimilarity], result of:
          0.027922407 = score(doc=2026,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.034865115 = weight(_text_:web in 2026) [ClassicSimilarity], result of:
          0.034865115 = score(doc=2026,freq=6.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.3122631 = fieldWeight in 2026, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.027922407 = weight(_text_:world in 2026) [ClassicSimilarity], result of:
          0.027922407 = score(doc=2026,freq=2.0), product of:
            0.13150178 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03421255 = queryNorm
            0.21233483 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.034865115 = weight(_text_:web in 2026) [ClassicSimilarity], result of:
          0.034865115 = score(doc=2026,freq=6.0), product of:
            0.11165301 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03421255 = queryNorm
            0.3122631 = fieldWeight in 2026, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
      0.2857143 = coord(4/14)
    
    Abstract
    Libraries, archives and museums rely on structured schemas and vocabularies to indicate classes in which a resource may belong. In the context of linked data, key organizational components are the RDF data model, element schemas and value vocabularies, with simple ontologies having minimally defined classes and properties in order to facilitate reuse and interoperability. Simplicity over formal semantics is a tenet of the open-world assumption underlying ontology languages central to the Semantic Web, but the result is a lack of constraints, data quality checks and validation capacity. Inconsistent use of vocabularies and ontologies that do not follow formal semantics rules and logical concept hierarchies further complicate the use of Semantic Web technologies. The Simple Knowledge Organization System (SKOS) helps make existing value vocabularies available in the linked data environment, but it exchanges precision for simplicity. Incompatibilities between simple organized vocabularies, Resource Description Framework Schemas and OWL ontologies and even basic notions of subjects and concepts prevent smooth translations and challenge the conversion of cultural institutions' unique legacy vocabularies for linked data. Adopting the linked data vision requires accepting loose semantic interpretations. To avoid semantic inconsistencies and illogical results, cultural organizations following the linked data path must be careful to choose the level of semantics that best suits their domain and needs.
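
    A minimal sketch of the SKOS trade-off the abstract describes: a concept needs only labels, a loose broader/narrower hierarchy and mapping links, with no formal OWL class semantics attached. The concepts and URIs below are invented:

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import SKOS

    EX = Namespace("http://example.org/vocab#")
    OTHER = Namespace("http://example.org/othervocab#")
    g = Graph()

    g.add((EX.Maps, SKOS.prefLabel, Literal("Maps", lang="en")))
    g.add((EX.Maps, SKOS.broader, EX.Documents))     # loose hierarchy, not a class subsumption
    g.add((EX.Maps, SKOS.exactMatch, OTHER.Karten))  # mapping link across vocabularies

    print(g.serialize(format="turtle"))
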
    Theme
    Semantic Web

Languages

  • e 132
  • d 36
  • pt 1

Types

  • a 108
  • el 56
  • m 13
  • s 7
  • x 7
  • r 5
  • p 2
  • n 1