Search (124 results, page 1 of 7)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.17
    Content
     Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.12
    Content
     Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  3. Candela, G.: ¬An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.09
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions who wish to publish and assess their collections.
    Date
     22.6.2023 18:23:31
  4. Latif, A.: Understanding linked open data : for linked data discovery, consumption, triplification and application development (2011) 0.08
    Abstract
     The Linked Open Data initiative has played a vital role in the realization of the Semantic Web at a global scale by publishing and interlinking diverse data sources on the Web. Access to this huge amount of Linked Data presents exciting benefits and opportunities. However, the inherent complexity attached to Linked Data understanding, and the lack of potential use cases and applications which can consume Linked Data, hinder its full exploitation by naïve web users and developers. This book aims to address these core limitations of Linked Open Data and contributes by presenting: (i) a conceptual model for fundamental understanding of the Linked Open Data sphere, (ii) a Linked Data application to search, consume and aggregate various Linked Data resources, (iii) a semantification and interlinking technique for conversion of legacy data, and (iv) potential application areas of Linked Open Data.
  5. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.05
    Abstract
     Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning e.g. subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (object constraint language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e., checking if a Topic Map is well formed, includes our merging conditions.
  6. Schreur, P.E.: ¬The use of Linked Data and artificial intelligence as key elements in the transformation of technical services (2020) 0.05
    Abstract
    Library Technical Services have benefited from numerous stimuli. Although initially looked at with suspicion, transitions such as the move from catalog cards to the MARC formats have proven enormously helpful to libraries and their patrons. Linked data and Artificial Intelligence (AI) hold the same promise. Through the conversion of metadata surrogates (cataloging) to linked open data, libraries can represent their resources on the Semantic Web. But in order to provide some form of controlled access to unstructured data, libraries must reach beyond traditional cataloging to new tools such as AI to provide consistent access to a growing world of full-text resources.
  7. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.05
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  8. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.05
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
  9. Tudhope, D.; Binding, C.: Mapping between linked data vocabularies in ARIADNE (2015) 0.04
    Abstract
    Semantic Enrichment Enabling Sustainability of Archaeological Links (SENESCHAL) was a project coordinated by the Hypermedia Research Unit at the University of South Wales. The project aims included widening access to key vocabulary resources. National cultural heritage thesauri and vocabularies are used by both national organizations and local authority Historic Environment Records and could potentially act as vocabulary hubs for the Web of Data. Following completion, a set of prominent UK archaeological thesauri and vocabularies is now freely available as Linked Open Data (LOD) via http://www.heritagedata.org - together with open source web services and user interface controls. This presentation will reflect on work done to date for the ARIADNE FP7 infrastructure project (http://www.ariadne-infrastructure.eu) mapping between archaeological vocabularies in different languages and the utility of a hub architecture. The poly-hierarchical structure of the Getty Art & Architecture Thesaurus (AAT) was extracted for use as an example mediating structure to interconnect various multilingual vocabularies originating from ARIADNE data providers. Vocabulary resources were first converted to a common concept-based format (SKOS) and the concepts were then manually mapped to nodes of the extracted AAT structure using some judgement on the meaning of terms and scope notes. Results are presented along with reflections on the wider application to existing European archaeological vocabularies and associated online datasets.
  10. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.04
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
  11. Kollia, I.; Tzouvaras, V.; Drosopoulos, N.; Stamou, G.: ¬A systemic approach for effective semantic access to cultural content (2012) 0.04
    Abstract
    A large on-going activity for digitization, dissemination and preservation of cultural heritage is taking place in Europe, United States and the world, which involves all types of cultural institutions, i.e., galleries, libraries, museums, archives and all types of cultural content. The development of Europeana, as a single point of access to European Cultural Heritage, has probably been the most important result of the activities in the field till now. Semantic interoperability, linked open data, user involvement and user generated content are key issues in these developments. This paper presents a system that provides content providers and users the ability to map, in an effective way, their own metadata schemas to common domain standards and the Europeana (ESE, EDM) data models. The system is currently largely used by many European research projects and the Europeana. Based on these mappings, semantic query answering techniques are proposed as a means for effective access to digital cultural heritage, providing users with content enrichment, linking of data based on their involvement and facilitating content search and retrieval. An experimental study is presented, involving content from national content aggregators, as well as thematic content aggregators and the Europeana, which illustrates the proposed system
    Content
     Contribution to a special topic section: Semantic Web and Reasoning for Cultural Heritage and Digital Libraries: http://www.semantic-web-journal.net/content/systemic-approach-effective-semantic-access-cultural-content http://www.semantic-web-journal.net/sites/default/files/swj147_3.pdf.
  12. Vizine-Goetz, D.; Hickey, C.; Houghton, A.; Thompson, R.: Vocabulary mapping for terminology services (2004) 0.04
    Abstract
    The paper describes a project to add value to controlled vocabularies by making inter-vocabulary associations. A methodology for mapping terms from one vocabulary to another is presented in the form of a case study applying the approach to the Educational Resources Information Center (ERIC) Thesaurus and the Library of Congress Subject Headings (LCSH). Our approach to mapping involves encoding vocabularies according to Machine-Readable Cataloging (MARC) standards, machine matching of vocabulary terms, and categorizing candidate mappings by likelihood of valid mapping. Mapping data is then stored as machine links. Vocabularies with associations to other schemes will be a key component of Web-based terminology services. The paper briefly describes how the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is used to provide access to a vocabulary with mappings.
  13. Soergel, D.: Towards a relation ontology for the Semantic Web (2011) 0.04
    Abstract
    The Semantic Web consists of data structured for use by computer programs, such as data sets made available under the Linked Open Data initiative. Much of this data is structured following the entity-relationship model encoded in RDF for syntactic interoperability. For semantic interoperability, the semantics of the relationships used in any given dataset needs to be made explicit. Ultimately this requires an inventory of these relationships structured around a relation ontology. This talk will outline a blueprint for such an inventory, including a format for the description/definition of binary and n-ary relations, drawing on ideas put forth in the classification and thesaurus community over the last 60 years, upper level ontologies, systems like FrameNet, the Buffalo Relation Ontology, and an analysis of linked data sets.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  14. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.04
    Content
Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  15. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.04
    Abstract
Ontologies tend to be found everywhere and are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems such as the semantic web, different parties will in general adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems.
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
  16. Neumaier, S.: Data integration for open data on the Web (2017) 0.03
    Abstract
In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to the different data formats prevalent on the Web, namely, standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourthly, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), as well as how/whether OWL and RDFS reasoning on top of integrated open data could help.
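The format gap the lecture starts from (tabular vs. tree-shaped vs. graph data) can be sketched in a few lines: lifting a CSV table into subject-predicate-object triples so it can be integrated with graph data. The column names and values below are invented for illustration, not taken from the lecture.

```python
# Sketch: lifting tabular open data into triples so it can be merged
# with graph data. Column names and values are invented examples.

import csv
import io

def table_to_triples(csv_text: str, id_column: str):
    """Turn each row of a CSV table into (subject, predicate, object)
    triples, using one column as the row identifier."""
    reader = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for row in reader:
        subject = row[id_column]
        for column, value in row.items():
            if column != id_column:
                triples.append((subject, column, value))
    return triples

# Toy open-data table (illustrative, not a real government dataset).
DATA = "dataset,licence,format\nbudget-2017,CC-BY,CSV\n"
```

The same pattern, with URIs in place of bare strings, is essentially how tabular data is mapped into RDF for integration.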
  17. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.03
    Abstract
In this presentation, we focus on a solution for providing uniform access to Digital Libraries and other online services. In order to enable uniform query access to heterogeneous sources, we must provide metadata interoperability in such a way that a query language - in this case SPARQL - can cope with the incompatibility of the metadata in the various sources without changing their already existing information models.
    Date
    26.12.2011 13:22:46
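The mediator idea behind uniform query access, as described in the abstract of this entry, can be caricatured in a few lines of Python. This is only a toy sketch under invented names: each per-source adapter hides its source's own information model behind a common query interface, and the mediator fans a single query out and merges the answers. Real SPARQL mediation over interlinked sources is of course far more involved.

```python
# Toy mediator for uniform query access over heterogeneous sources.
# Adapter names, records and IDs are invented for illustration.

from typing import Callable, Dict, List

# An adapter maps a mediator-level query term to local results,
# translating it to the source's own information model internally.
Adapter = Callable[[str], List[str]]

def library_a(term: str) -> List[str]:
    records = {"interoperability": ["A:rec1", "A:rec7"]}
    return records.get(term, [])

def library_b(term: str) -> List[str]:
    records = {"interoperability": ["B:doc3"]}
    return records.get(term, [])

def uniform_query(term: str, sources: Dict[str, Adapter]) -> List[str]:
    """Fan a single query out to all sources and merge the answers."""
    results: List[str] = []
    for adapter in sources.values():
        results.extend(adapter(term))
    return results
```

The point of the pattern is that callers issue one query in one language; only the adapters know the local models, so adding a source never changes the existing ones.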
  18. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.03
    Abstract
The Brazilian Government has made a massive volume of structured, semi-structured and non-structured public data available on the web to ensure that the administration is as transparent as possible. Subsequently, providing applications with enough capability to handle this "big data environment", so that vital and decisive information is readily accessible, has become a tremendous challenge. In this environment, data processing is done via new approaches in the area of information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, denominated DBgoldbr, designed to classify government public information with the help of ontologies, by transforming open data into linked open data. To achieve this objective, we used "soft system methodology" to identify problems, to collect users' needs and to design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, in order to support the design and management of public policies.
  19. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.03
    Content
Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  20. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.03
    Abstract
Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus gain access to other collections while using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced many such alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources.
Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary of the first collection, Mandragore [6], or that of the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of vocabularies' semantic and lexical information. Besides easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
    Content
Presentation given at the 'UDC Seminar: Information Access for the Global Community', The Hague, 4-5 June 2007.
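The C/D example in the abstract of this entry (a query for concept C also returning objects indexed against its mapped equivalent D) amounts to query expansion over a concept mapping. The sketch below illustrates it under invented indexes and object IDs; the concept identifiers merely echo the Mandragore and Iconclass vocabularies named in the abstract and are not real alignment data.

```python
# Sketch of search over two collections linked by a concept mapping,
# following the C/D example in the abstract. The mapping, indexes and
# object IDs are invented for illustration.

MAPPING = {"mandragore:lion": "iconclass:25F23(LION)"}  # C -> D

INDEX = {
    "mandragore:lion": ["ms-fr-1", "ms-fr-2"],
    "iconclass:25F23(LION)": ["ms-nl-9"],
}

def search(concept: str):
    """Return objects indexed under a concept or its mapped equivalent."""
    hits = list(INDEX.get(concept, []))
    mapped = MAPPING.get(concept)
    if mapped:
        hits.extend(INDEX.get(mapped, []))
    return hits
```

A user browsing with the first collection's vocabulary thus also reaches objects from the second collection, which is exactly the unified-access scenario the pilot browser demonstrates.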

Languages

  • e 105
  • d 17

Types

  • a 82
  • el 39
  • m 11
  • s 6
  • x 3
  • r 2