Search (176 results, page 1 of 9)

  • theme_ss:"Semantische Interoperabilität"
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.42
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
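    Note on the relevance figures: the score shown after each title (0.42 for this entry) is a Lucene ClassicSimilarity value. A single term contribution combines the reported statistics (tf = 2, idf = 8.478011, queryNorm = 0.0320743, fieldNorm = 0.0390625) as follows; the entry's total is the sum of such contributions, scaled by a coord() factor for the fraction of query terms matched:

      \[
      \begin{aligned}
      \text{queryWeight} &= \text{idf} \cdot \text{queryNorm} = 8.478011 \cdot 0.0320743 = 0.27192625,\\
      \text{fieldWeight} &= \sqrt{\text{tf}} \cdot \text{idf} \cdot \text{fieldNorm} = 1.4142135 \cdot 8.478011 \cdot 0.0390625 = 0.46834838,\\
      \text{score} &= \text{queryWeight} \cdot \text{fieldWeight} = 0.27192625 \cdot 0.46834838 \approx 0.12735622.
      \end{aligned}
      \]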
  2. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.41
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  3. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.07
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
    RSWK
    Ontologie <Wissensverarbeitung> / Semantic Web
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Subject
    Ontologie <Wissensverarbeitung> / Semantic Web
    Theme
    Semantic Web
  4. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.04
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby - the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the JavaScript library jQuery.
    Source
    Proceedings of the Sixth Workshop on Scripting and Development for the Semantic Web, Crete, Greece, May 31, 2010, CEUR Workshop Proceedings, SFSW - http://ceur-ws.org/Vol-699/Paper2.pdf
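    To illustrate the kind of record a SKOS(XL) maintenance tool such as iQvoc manages, here is a minimal sketch in Python using rdflib (not iQvoc's own Ruby code; the example.org URIs are placeholders):

      # A minimal SKOS-XL concept built with rdflib - the kind of record
      # a vocabulary maintenance tool publishes as Linked Data.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      SKOSXL = Namespace("http://www.w3.org/2008/05/skos-xl#")
      EX = Namespace("http://example.org/vocab/")   # hypothetical vocabulary base URI

      g = Graph()
      g.bind("skos", SKOS)
      g.bind("skosxl", SKOSXL)

      concept = EX["c1"]
      label = EX["c1-label-de"]
      g.add((concept, RDF.type, SKOS.Concept))
      # SKOS-XL reifies labels as resources so they can carry metadata themselves.
      g.add((label, RDF.type, SKOSXL.Label))
      g.add((label, SKOSXL.literalForm, Literal("Semantische Interoperabilität", lang="de")))
      g.add((concept, SKOSXL.prefLabel, label))

      print(g.serialize(format="turtle"))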
  5. Linked data and user interaction : the road ahead (2015) 0.04
    Abstract
    This collection of research papers provides extensive information on deploying services, concepts, and approaches for using open linked data from libraries and other cultural heritage institutions, with a special emphasis on how such institutions can create effective end-user interfaces using open, linked data or other datasets. These papers are essential reading for anyone interested in user interface design or the semantic web.
    Content
    H. Frank Cervone: Linked data and user interaction : an introduction -- Paola Di Maio: Linked Data Beyond Libraries Towards Universal Interfaces and Knowledge Unification -- Emmanuelle Bermes: Following the user's flow in the Digital Pompidou -- Patrick Le Boeuf: Customized OPACs on the Semantic Web : the OpenCat prototype -- Ryan Shaw, Patrick Golden and Michael Buckland: Using linked library data in working research notes -- Timm Heuss, Bernhard Humm, Tilman Deuschel, Torsten Frohlich, Thomas Herth and Oliver Mitesser: Semantically guided, situation-aware literature research -- Niklas Lindstrom and Martin Malmsten: Building interfaces on a networked graph -- Natasha Simons, Arve Solland and Jan Hettenhausen: Griffith Research Hub. Cf.: http://d-nb.info/1032799889.
    LCSH
    Semantic Web
    RSWK
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Linked Data / Online-Katalog / Semantic Web / Benutzeroberfläche / Kongress / Singapur <2013>
    Subject
    Bibliothek / Linked Data / Benutzer / Mensch-Maschine-Kommunikation / Recherche / Suchverfahren / Aufsatzsammlung
    Linked Data / Online-Katalog / Semantic Web / Benutzeroberfläche / Kongress / Singapur <2013>
    Semantic Web
    Theme
    Semantic Web
  6. Shaw, R.; Rabinowitz, A.; Golden, P.; Kansa, E.: Report on and demonstration of the PeriodO period gazetteer (2015) 0.03
    Abstract
    The PeriodO period gazetteer documents definitions of historical period names. Each entry of the gazetteer identifies the definition of a single period. To be included in the gazetteer, a definition must a) give the period a name, b) impose some temporal bounds on the period, c) have some implicit or explicit association with a geographical region, and d) have been formally or informally published in some citable source. Much care has been put into giving period definitions stable identifiers that can be resolved to RDF representations of those definitions. Anyone can propose additions of new definitions to PeriodO, and we have implemented an open source web service and browser-based client for distributed versioning and collaborative maintenance of the gazetteer.
  7. Tudhope, D.; Binding, C.: Mapping between linked data vocabularies in ARIADNE (2015) 0.03
    Abstract
    Semantic Enrichment Enabling Sustainability of Archaeological Links (SENESCHAL) was a project coordinated by the Hypermedia Research Unit at the University of South Wales. The project aims included widening access to key vocabulary resources. National cultural heritage thesauri and vocabularies are used by both national organizations and local authority Historic Environment Records and could potentially act as vocabulary hubs for the Web of Data. Following completion, a set of prominent UK archaeological thesauri and vocabularies is now freely available as Linked Open Data (LOD) via http://www.heritagedata.org - together with open source web services and user interface controls. This presentation will reflect on work done to date for the ARIADNE FP7 infrastructure project (http://www.ariadne-infrastructure.eu) mapping between archaeological vocabularies in different languages and the utility of a hub architecture. The poly-hierarchical structure of the Getty Art & Architecture Thesaurus (AAT) was extracted for use as an example mediating structure to interconnect various multilingual vocabularies originating from ARIADNE data providers. Vocabulary resources were first converted to a common concept-based format (SKOS) and the concepts were then manually mapped to nodes of the extracted AAT structure using some judgement on the meaning of terms and scope notes. Results are presented along with reflections on the wider application to existing European archaeological vocabularies and associated online datasets.
  8. Ledl, A.: Demonstration of the BAsel Register of Thesauri, Ontologies & Classifications (BARTOC) (2015) 0.03
    Abstract
    The BAsel Register of Thesauri, Ontologies & Classifications (BARTOC, http://bartoc.org) is a bibliographic database aiming to record metadata of as many Knowledge Organization Systems as possible. It has a faceted, responsive web design search interface in 20 EU languages. With more than 1,300 interdisciplinary items in 77 languages, BARTOC is the largest database of its kind, multilingual both in content and features, and it is still growing. This being said, the demonstration of BARTOC would be suitable for topic no. 10 [Multilingual and Interdisciplinary KOS applications and tools]. BARTOC has been developed by the University Library of Basel, Switzerland. It is rooted in the library and information science tradition of collecting bibliographic records of controlled and structured vocabularies, yet in a more contemporary manner. BARTOC is based on the open source content management system Drupal 7.
  9. Neumaier, S.: Data integration for open data on the Web (2017) 0.02
    Abstract
    In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to the different data formats prevalent on the Web, namely standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourth, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close with briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), as well as how/whether OWL and RDFS reasoning on top of integrated open data could help.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
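    The lecture distinguishes tabular, tree-shaped, and graph data. A minimal Python sketch of reading one instance of each shape (file names are hypothetical placeholders):

      # The three data shapes the lecture distinguishes:
      # tabular (CSV), tree-shaped (JSON), and graph (RDF via rdflib).
      import csv, json
      from rdflib import Graph

      with open("budget.csv", newline="") as f:      # tabular
          rows = list(csv.DictReader(f))

      with open("dataset-metadata.json") as f:       # tree-shaped
          metadata = json.load(f)

      g = Graph()                                    # graph
      g.parse("dataset.ttl", format="turtle")
      print(len(rows), metadata.get("title"), len(g))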
  10. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI ML based subject indexing system for libraries (2023) 0.02
    Abstract
    The research study as reported here is an attempt to explore the possibilities of an AI/ML-based semi-automated indexing system in a library setup to handle large volumes of documents. It uses the Python virtual environment to install and configure an open source AI environment (named Annif) to feed the LOD (Linked Open Data) dataset of Library of Congress Subject Headings (LCSH) as a standard KOS (Knowledge Organisation System). The framework deployed the Turtle format of LCSH after cleaning the file with Skosify, applied an array of backend algorithms (namely TF-IDF, Omikuji, and NN-Ensemble) to measure relative performance, and selected Snowball as an analyser. The training of Annif was conducted with a large set of bibliographic records populated with subject descriptors (MARC tag 650$a) and indexed by trained LIS professionals. The training dataset is first treated with MarcEdit to export it in a format suitable for OpenRefine, and then in OpenRefine it undergoes many steps to produce a bibliographic record set suitable to train Annif. The framework, after training, has been tested with a bibliographic dataset to measure indexing efficiencies, and finally, the automated indexing framework is integrated with data wrangling software (OpenRefine) to produce suggested headings on a mass scale. The entire framework is based on open-source software, open datasets, and open standards.
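    The TF-IDF backend mentioned in the abstract can be illustrated in a few lines. The following sketch shows the underlying idea - suggesting subject headings by TF-IDF similarity to already-indexed records - and is not Annif's actual implementation; the records and headings are toy data:

      # Toy TF-IDF subject suggestion: rank indexed records by cosine
      # similarity to a new document and return their subject headings.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      records = [
          "ontology matching for database integration",
          "linked open data in libraries",
          "subject indexing with thesauri",
      ]
      headings = [["Ontologies"], ["Linked data"], ["Subject cataloging"]]

      vectorizer = TfidfVectorizer()
      matrix = vectorizer.fit_transform(records)

      def suggest(text, top_n=1):
          sims = cosine_similarity(vectorizer.transform([text]), matrix)[0]
          best = sims.argsort()[::-1][:top_n]
          return [h for i in best for h in headings[i]]

      print(suggest("integrating ontologies and databases"))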
  11. Rölke, H.; Weichselbraun, A.: Ontologien und Linked Open Data (2023) 0.02
    Abstract
    The term ontology originally comes from metaphysics, a branch of philosophy concerned with understanding the fundamental structure and principles of reality. Ontologies address the question of which things exist at the most fundamental level, how they can be structured, and in which relationships they stand to one another. In information science, by contrast, ontologies are used to formalize the vocabulary for describing domains of knowledge. The goal is for all actors working in these domains to use the same concepts and terms, enabling smooth collaboration without misunderstandings. For example, the Dublin Core Metadata Initiative defined 15 core elements that can be used to describe electronic resources and media. Each element is described by a unique designation (for example, identifier) and an associated conception that fixes the meaning of this designation as precisely as possible. According to the Dublin Core ontology, an identifier must, for example, uniquely identify a document with respect to an associated catalog. Depending on the catalog, an ISBN (catalog of books), ISSN (catalog of journals), URL (Web), DOI (publication database), etc. would therefore qualify as an identifier.
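    A small illustration of the Dublin Core elements discussed above, using Python's rdflib (the document URI and DOI are invented placeholders):

      # Describing one resource with Dublin Core terms in rdflib.
      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DCTERMS

      g = Graph()
      doc = URIRef("http://example.org/doc/1")       # hypothetical document URI
      g.add((doc, DCTERMS.title, Literal("Ontologien und Linked Open Data")))
      # Per Dublin Core, the identifier must uniquely identify the resource
      # with respect to some catalog - here a DOI-style placeholder.
      g.add((doc, DCTERMS.identifier, Literal("10.1000/example-doi")))
      print(g.serialize(format="turtle"))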
  12. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
    Theme
    Semantic Web
  13. Latif, A.: Understanding linked open data : for linked data discovery, consumption, triplification and application development (2011) 0.02
    Abstract
    The Linked Open Data initiative has played a vital role in the realization of the Semantic Web at a global scale by publishing and interlinking diverse data sources on the Web. Access to this huge amount of Linked Data presents exciting benefits and opportunities. However, the inherent complexity of understanding Linked Data, together with the lack of potential use cases and applications which can consume Linked Data, hinders its full exploitation by naïve web users and developers. This book aims to address these core limitations of Linked Open Data and contributes by presenting: (i) a conceptual model for a fundamental understanding of the Linked Open Data sphere, (ii) a Linked Data application to search, consume and aggregate various Linked Data resources, (iii) a semantification and interlinking technique for the conversion of legacy data, and (iv) potential application areas of Linked Open Data.
  14. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.02
    Abstract
    Ontologies are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. This book proposes ontology matching as a solution to the problem of semantic heterogeneity faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
    LCSH
    World wide web
    RSWK
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    Subject
    Datenintegration / Informationssystem / Matching / Ontologie <Wissensverarbeitung> / Schema <Informatik> / Semantic Web
    World wide web
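    A toy sketch of the simplest family of matching techniques the book surveys - label-based string similarity - using only the Python standard library (the entity labels are invented):

      # Propose equivalence correspondences between entities of two
      # ontologies when their labels are lexically close enough.
      from difflib import SequenceMatcher

      onto_a = {"a1": "postal code", "a2": "family name"}
      onto_b = {"b1": "zip code", "b2": "surname", "b3": "given name"}

      def match(src, tgt, threshold=0.6):
          pairs = []
          for sid, slabel in src.items():
              for tid, tlabel in tgt.items():
                  sim = SequenceMatcher(None, slabel, tlabel).ratio()
                  if sim >= threshold:
                      pairs.append((sid, tid, round(sim, 2)))
          return pairs

      # Purely lexical matching misses synonym pairs such as
      # "family name"/"surname" - which is why richer techniques
      # (structure, background knowledge) are needed.
      print(match(onto_a, onto_b))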
  15. Heel, F.: Abbildungen zwischen der Dewey-Dezimalklassifikation (DDC), der Regensburger Verbundklassifikation (RVK) und der Schlagwortnormdatei (SWD) für die Recherche in heterogen erschlossenen Datenbeständen : Möglichkeiten und Problembereiche (2007) 0.02
    Abstract
    Uniform subject indexing in Germany is complicated by the multitude of existing and actively used indexing systems, universal and subject-specific classifications, and subject thesauri. Users of library catalogs or databases therefore find it difficult to run topic-specific searches across heterogeneously indexed holdings: they currently have to learn to handle several indexing instruments and issue different queries in order to obtain the desired results across databases. To give users uniform access to heterogeneously indexed holdings, and at the same time to reduce the workload for librarians, building a so-called "integrated retrieval" is worthwhile. By linking the different subject indexing systems via concordances, users can run a subject search across differently indexed holdings with a vocabulary familiar to them, without having to know the specific peculiarities of the various indexing instruments. In this thesis, three example mappings for the field of library and information science have been created between the subject indexing systems most important for Germany: the Dewey Decimal Classification (DDC), the Regensburger Verbundklassifikation (RVK), and the Schlagwortnormdatei (SWD). The results are intended to give a first overview of specific problem areas and possibilities of the concordances created here (DDC-RVK, SWD-DDC, and SWD-RVK) in order to advance the development of a future retrieval tool (and, if appropriate, a classification aid). The concordances created are attached to the thesis as an appendix.
    Content
    Bachelor's thesis in the degree program Library and Information Management, Faculty of Information and Communication, Hochschule der Medien Stuttgart
    Imprint
    Stuttgart : Hochschule der Medien / Fakultät Information und Kommunikation
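    A minimal sketch of how such a concordance can drive integrated retrieval; the mapping values are invented placeholders, not the thesis's actual concordance entries:

      # Route a query term from a familiar vocabulary (SWD) to the
      # corresponding DDC and RVK classes, so one query can be issued
      # against all differently indexed databases.
      concordance = {
          "Informationswissenschaft": {"DDC": "020", "RVK": "AN 50000"},  # invented mapping
      }

      def expand(term):
          # Return the search keys for every indexing system covered.
          mapped = concordance.get(term, {})
          return {"SWD": term, **mapped}

      print(expand("Informationswissenschaft"))
      # {'SWD': 'Informationswissenschaft', 'DDC': '020', 'RVK': 'AN 50000'}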
  16. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.02
    Abstract
    The Brazilian Government has made a massive volume of structured, semi-structured and non-structured public data available on the web to ensure that the administration is as transparent as possible. Subsequently, providing applications with enough capability to handle this "big data environment", so that vital and decisive information is readily accessible, has become a tremendous challenge. In this environment, data processing is done via new approaches in the area of information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, denominated DBgoldbr, designed to classify government public information with the help of ontologies, by transforming open data into linked open data. To achieve this objective, we used soft system methodology to identify problems, to collect users' needs and to design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, in order to support the design and management of public policies.
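    A minimal sketch of the triplification step - turning one tabular open-data record into RDF triples as N-Triples lines; all URIs and field names are hypothetical:

      # Turn one row of a tabular open dataset into N-Triples lines.
      row = {"id": "42", "program": "Bolsa Familia", "budget": "1000000"}

      base = "http://example.org/resource/"         # hypothetical base URI
      subject = f"<{base}program/{row['id']}>"
      triples = [
          f"{subject} <http://purl.org/dc/terms/title> \"{row['program']}\" .",
          f"{subject} <http://example.org/ontology/budget> \"{row['budget']}\" .",
      ]
      print("\n".join(triples))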
  17. Carbonaro, A.; Santandrea, L.: ¬A general Semantic Web approach for data analysis on graduates statistics 0.01
    Abstract
    Currently, several datasets released in Linked Open Data format are available at national and international level, but the lack of shared strategies concerning the definition of concepts related to the statistical publishing community makes it difficult to compare given facts originating from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduates' surveys domain. The developed system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys covering more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and its subsequent publication in the Linked Open Data context; iii) provides for their reuse in the open data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the graduates' surveys domain by proposing the creation of a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
    Theme
    Semantic Web
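    A sketch of querying such a SPARQL endpoint from Python with the SPARQLWrapper library; the endpoint URL and ontology terms are hypothetical, not SW4AL's actual ones:

      # Count graduates per degree from a (hypothetical) SPARQL endpoint.
      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("http://example.org/sw4al/sparql")  # hypothetical endpoint
      sparql.setQuery("""
          SELECT ?degree (COUNT(?g) AS ?n)
          WHERE { ?g a <http://example.org/ontology/Graduate> ;
                     <http://example.org/ontology/degree> ?degree . }
          GROUP BY ?degree
      """)
      sparql.setReturnFormat(JSON)
      results = sparql.query().convert()
      for row in results["results"]["bindings"]:
          print(row["degree"]["value"], row["n"]["value"])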
  18. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.01
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions who wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
  19. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    Abstract
    The medium Internet is in flux, and with it its conditions of publication and reception. What opportunities do the two visions of the future currently discussed in parallel, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models in terms of application and technology, but also highlights their shortcomings as well as the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, it sketches a strategy for integratively exploiting the respective strengths of each.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, vol. 1. Ed.: A. Zerfaß et al.
    Theme
    Semantic Web
  20. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.01
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning e.g. subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e. checking if a Topic Map is well formed, includes our merging conditions.
    Source
    The Semantic Web - ISWC 2003. Eds. D. Fensel et al
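    A toy sketch of the core XTM merging rule examined in the paper - topics that share a subject identifier denote the same subject and are merged; the identifiers and names are invented:

      # Merge topics sharing a subject identifier, unioning their names.
      topics = [
          ("t1", "http://example.org/psi/vienna", "Wien"),
          ("t2", "http://example.org/psi/vienna", "Vienna"),
          ("t3", "http://example.org/psi/graz", "Graz"),
      ]

      merged = {}
      for tid, psi, name in topics:
          merged.setdefault(psi, set()).add(name)   # merging condition: same PSI

      for psi, names in merged.items():
          print(psi, sorted(names))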

Languages

  • e 131
  • d 40
  • pt 1

Types

  • a 109
  • el 57
  • m 14
  • x 9
  • s 7
  • r 4
  • p 2
  • n 1