Search (33 results, page 2 of 2)

  • × theme_ss:"Semantic Web"
  • × theme_ss:"Semantische Interoperabilität"
  1. Smith, D.A.: Exploratory and faceted browsing over heterogeneous and cross-domain data sources (2011) 0.00
    0.0012668705 = product of:
      0.0063343523 = sum of:
        0.0063343523 = weight(_text_:a in 4839) [ClassicSimilarity], result of:
          0.0063343523 = score(doc=4839,freq=6.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.13239266 = fieldWeight in 4839, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4839)
      0.2 = coord(1/5)
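    The indented tree above is Lucene's ClassicSimilarity "explain" output for the term "_text_:a". As a rough sanity check, the sketch below reproduces this first result's score from the quantities shown in the tree (queryNorm, fieldNorm and coord are taken as given; small deviations in the last digits are expected from Lucene's internal rounding):

```python
import math

# Quantities read from the explain tree for doc 4839, term "_text_:a".
freq = 6.0              # term frequency in the field
doc_freq = 37942        # documents containing the term
max_docs = 44218        # documents in the index
field_norm = 0.046875   # Lucene's lossy-encoded field length norm
query_norm = 0.041494574
coord = 1 / 5           # 1 of 5 query clauses matched

tf = math.sqrt(freq)                            # 2.4494898 = tf(freq=6.0)
idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~1.153047 = idf(...)
query_weight = idf * query_norm                 # ~0.047845192 = queryWeight
field_weight = tf * idf * field_norm            # ~0.13239266 = fieldWeight
score = query_weight * field_weight * coord     # ~0.0012668705

print(f"{score:.10f}")
```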
    
    Abstract
    Exploration of heterogeneous data sources increases the value of information by allowing users to answer questions across multiple sources: information posted across the Web can be used to answer questions and to learn about new domains. We have conducted research that lowers the interrogation time of faceted data by combining related information from different sources. The work contributes methodologies for combining heterogeneous sources and for delivering that data to a user interface scalably, with enough performance to support rapid interrogation of the knowledge by the user. It also contributes methods for combining linked data sources so that users can create faceted browsers that target the facets of their information needs. The work is grounded and validated in a number of experiments and test cases that study the contributions in domain research work.
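    As a toy illustration of the faceted-browsing idea described in this abstract (all records and field names below are invented, not taken from the thesis): merge records about the same item from two sources, then compute the facet counts a browsing interface would display.

```python
from collections import Counter

# Invented records from two sources, keyed by a shared identifier.
source_a = {"item1": {"creator": "Smith", "year": "2011"},
            "item2": {"creator": "Jones", "year": "2008"}}
source_b = {"item1": {"subject": "Semantic Web"},
            "item2": {"subject": "Linked Data"}}

# Merge the attributes of each item across both sources.
merged = {key: {**source_a.get(key, {}), **source_b.get(key, {})}
          for key in source_a.keys() | source_b.keys()}

# One counter per facet field drives the browsing interface.
facets = {field: Counter(rec[field] for rec in merged.values() if field in rec)
          for field in ("creator", "year", "subject")}
print(facets["subject"])   # e.g. Counter({'Semantic Web': 1, 'Linked Data': 1})
```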
    Footnote
    A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy. June 2011.
  2. Krause, J.: Semantic heterogeneity : comparing new semantic web approaches with those of digital libraries (2008) 0.00
    0.0012190467 = product of:
      0.006095233 = sum of:
        0.006095233 = weight(_text_:a in 1908) [ClassicSimilarity], result of:
          0.006095233 = score(doc=1908,freq=8.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.12739488 = fieldWeight in 1908, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1908)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (the Simple Knowledge Organization System and others), mitigate common arguments from the digital library (DL) community against participation in the semantic web. Design/methodology/approach - The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to semantic content during indexing. Findings - The points criticised by the semantic web and ontology approaches are the same as those addressed by the DL "Shell model approach" from the mid-1990s, with its emphasis on the centrality of heterogeneity components (used, for example, in vascoda). The Shell model argument began with the "invisible web", necessitating the restructuring of DL approaches. The conclusion is that the two approaches fit well together and that the Shell model, with its semantic heterogeneity components, can be reformulated on a semantic web basis. Practical implications - Reinterpreting the DL approaches to semantic heterogeneity and adapting them to the standards and tools supported by the W3C appears to be the best solution. It is therefore recommended that - although most semantic web standards are not yet technologically mature for commercial applications - all individual DL developments be checked for their adaptability to the W3C standards of the semantic web. Originality/value - A unique conceptual analysis of the parallel developments emanating from the digital library and semantic web communities.
    Type
    a
  3. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.00
    0.0012190467 = product of:
      0.006095233 = sum of:
        0.006095233 = weight(_text_:a in 1909) [ClassicSimilarity], result of:
          0.006095233 = score(doc=1909,freq=8.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.12739488 = fieldWeight in 1909, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1909)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal show some weaknesses of this approach which appear in most metadata-driven Digital Libraries (DLs) and subject-specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that improve quality in DLs when combined with the heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods derived from scientometrics and network analysis will be implemented with the objective of re-ranking result sets by structural properties: ranking by core journals (so-called Bradfordizing) and ranking by the centrality of authors in co-authorship networks. Findings - The methods focus on the query side and the result side of a search and are designed to reinforce each other. Conceptually, they will improve search quality and help ensure that the most relevant documents in result sets are ranked higher. Originality/value - The paper's central contribution is the integration of three structural value-adding methods, which aim to reduce the semantic complexity of distributed DLs at several stages of the information retrieval process: query construction, search, and ranking/re-ranking.
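    Bradfordizing, as described above, can be sketched in a few lines (the result set and journal names are invented for illustration): count how productive each journal is within the result set, then rank documents from the most productive ("core") journals first.

```python
from collections import Counter

# Invented result set: (document id, source journal) pairs.
results = [("d1", "J. Doc"), ("d2", "JASIST"), ("d3", "JASIST"),
           ("d4", "Scientometrics"), ("d5", "JASIST")]

# Journal productivity within the result set; Bradford's "core"
# journals are the most productive ones.
productivity = Counter(journal for _, journal in results)

# Re-rank: documents from more productive journals first (the stable
# sort keeps the original order within each journal).
bradfordized = sorted(results, key=lambda r: -productivity[r[1]])
print([doc for doc, _ in bradfordized])   # ['d2', 'd3', 'd5', 'd1', 'd4']
```

    Ranking by author centrality works analogously, with the sort key replaced by a centrality measure computed over the co-authorship network.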
    Type
    a
  4. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.00
    0.0012190467 = product of:
      0.006095233 = sum of:
        0.006095233 = weight(_text_:a in 5478) [ClassicSimilarity], result of:
          0.006095233 = score(doc=5478,freq=8.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.12739488 = fieldWeight in 5478, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5478)
      0.2 = coord(1/5)
    
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world, using MAchine Readable Cataloging (MARC) as a basis for comparing the scope and extensibility of these potential new standards. The researchers selected 14 libraries currently developing Library Linked Data (LLD) schemas, drawn from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers. Because schemas and data models evolve via local decisions, the choices of models, schemas, and elements used in each library's LD differ substantially and can create interoperability issues for LD services. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD, including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide also existed between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even no listed data model. Libraries worldwide are not using the same elements, or even the same ontologies, schemas and data models, to describe the same materials with the same general concepts.
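    The interoperability problem the study reports can be made concrete: two libraries describing the same book with disjoint vocabularies (DC Terms vs. Schema.org) share no predicates, so a consumer needs an explicit mapping. A minimal rdflib sketch with an invented identifier:

```python
from rdflib import Graph, Literal, Namespace, URIRef

DCT = Namespace("http://purl.org/dc/terms/")
SDO = Namespace("https://schema.org/")
book = URIRef("http://example.org/book/1")   # invented identifier

library_a, library_b = Graph(), Graph()
library_a.add((book, DCT.title, Literal("Linked Data")))   # DC Terms
library_b.add((book, SDO.name, Literal("Linked Data")))    # Schema.org

# Same resource, same value, but no shared predicates to query across.
print(set(library_a.predicates()) & set(library_b.predicates()))   # set()
```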
    Type
    a
  5. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.00
    0.0012190467 = product of:
      0.006095233 = sum of:
        0.006095233 = weight(_text_:a in 600) [ClassicSimilarity], result of:
          0.006095233 = score(doc=600,freq=8.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.12739488 = fieldWeight in 600, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to SKOS representation. Design/methodology/approach - The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work considering the representation of complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool. Findings - The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed. Originality/value - Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
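    The modelling approach summarised above - facet indicators as rdf:Property sub-hierarchies whose members specialise skos:related - can be sketched with rdflib. The namespace, property and concept names below are invented placeholders, not the actual ILC vocabulary:

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import SKOS

ILC = Namespace("http://example.org/ilc/")   # placeholder namespace

g = Graph()
# A facet indicator modelled as a property specialising skos:related.
g.add((ILC.facet37, RDF.type, RDF.Property))
g.add((ILC.facet37, RDFS.subPropertyOf, SKOS.related))
# A compound, faceted description: one concept linked to its focus
# concept through the facet property.
g.add((ILC.conceptA, RDF.type, SKOS.Concept))
g.add((ILC.conceptB, RDF.type, SKOS.Concept))
g.add((ILC.conceptA, ILC.facet37, ILC.conceptB))
print(g.serialize(format="turtle"))
```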
    Type
    a
  6. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.00
    9.752372E-4 = product of:
      0.004876186 = sum of:
        0.004876186 = weight(_text_:a in 3398) [ClassicSimilarity], result of:
          0.004876186 = score(doc=3398,freq=8.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.10191591 = fieldWeight in 3398, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3398)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - To show how semantic web techniques can help address semantic interoperability issues in the broad cultural heritage domain, allowing users integrated and seamless access to heterogeneous collections. Design/methodology/approach - This paper presents the heterogeneity problems to be solved. It introduces semantic web techniques that can help in solving them, focusing on the representation of controlled vocabularies and their semantic alignment. It gives pointers to some previous projects and experiments that have tried to address the problems discussed. Findings - Semantic web research provides practical technical and methodological approaches to tackle the different issues. Two contributions of interest are the Simple Knowledge Organisation System (SKOS) model and automatic vocabulary alignment methods and tools. These contributions were demonstrated to be usable for enabling semantic search and navigation across collections. Research limitations/implications - The research aims at designing representation and alignment methods for solving interoperability problems in the context of controlled subject vocabularies. Given the variety and technical richness of current research in the semantic web field, it is impossible to provide an in-depth account or an exhaustive list of references; every aspect of the paper is, however, given one or several pointers for further reading. Originality/value - This article provides a general and practical introduction to relevant semantic web techniques. It is of specific value for practitioners in the cultural heritage and digital library domains who are interested in applying these methods in practice.
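    One family of alignment methods mentioned here, lexical matching between controlled vocabularies, can be sketched as follows (the vocabularies and URIs are invented): identical preferred labels yield candidate skos:exactMatch links, which an editor would then vet.

```python
# Invented vocabularies: concept URI -> normalised preferred label.
voc_a = {"http://example.org/a/1": "ontology",
         "http://example.org/a/2": "thesaurus"}
voc_b = {"http://example.org/b/9": "ontology",
         "http://example.org/b/7": "classification"}

# Naive lexical alignment: equal prefLabels become skos:exactMatch
# candidates for an editor to review.
candidates = [(ua, "skos:exactMatch", ub)
              for ua, label_a in voc_a.items()
              for ub, label_b in voc_b.items()
              if label_a == label_b]
print(candidates)   # one candidate pair: a/1 <-> b/9
```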
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification", held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to a special issue on "Digital libraries and the semantic web: context, applications and research".
    Type
    a
  7. Siwecka, D.: Knowledge organization systems used in European national libraries towards interoperability of the semantic Web (2018) 0.00
    9.752372E-4 = product of:
      0.004876186 = sum of:
        0.004876186 = weight(_text_:a in 4815) [ClassicSimilarity], result of:
          0.004876186 = score(doc=4815,freq=2.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.10191591 = fieldWeight in 4815, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4815)
      0.2 = coord(1/5)
    
    Type
    a
  8. Linked data and user interaction : the road ahead (2015) 0.00
    8.6199614E-4 = product of:
      0.0043099807 = sum of:
        0.0043099807 = weight(_text_:a in 2552) [ClassicSimilarity], result of:
          0.0043099807 = score(doc=2552,freq=4.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.090081796 = fieldWeight in 2552, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2552)
      0.2 = coord(1/5)
    
    Abstract
    This collection of research papers provides extensive information on deploying services, concepts, and approaches for using open linked data from libraries and other cultural heritage institutions, with a special emphasis on how these institutions can create effective end-user interfaces using open, linked data or other datasets. These papers are essential reading for anyone interested in user interface design or the semantic web.
    Content
    H. Frank Cervone: Linked data and user interaction : an introduction -- Paola Di Maio: Linked Data Beyond Libraries Towards Universal Interfaces and Knowledge Unification -- Emmanuelle Bermes: Following the user's flow in the Digital Pompidou -- Patrick Le Boeuf: Customized OPACs on the Semantic Web : the OpenCat prototype -- Ryan Shaw, Patrick Golden and Michael Buckland: Using linked library data in working research notes -- Timm Heuss, Bernhard Humm, Tilman Deuschel, Torsten Fröhlich, Thomas Herth and Oliver Mitesser: Semantically guided, situation-aware literature research -- Niklas Lindström and Martin Malmsten: Building interfaces on a networked graph -- Natasha Simons, Arve Solland and Jan Hettenhausen: Griffith Research Hub. Cf. http://d-nb.info/1032799889.
  9. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.00
    8.6199614E-4 = product of:
      0.0043099807 = sum of:
        0.0043099807 = weight(_text_:a in 3934) [ClassicSimilarity], result of:
          0.0043099807 = score(doc=3934,freq=4.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.090081796 = fieldWeight in 3934, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
      0.2 = coord(1/5)
    
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
  10. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.00
    8.445803E-4 = product of:
      0.0042229015 = sum of:
        0.0042229015 = weight(_text_:a in 5329) [ClassicSimilarity], result of:
          0.0042229015 = score(doc=5329,freq=6.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.088261776 = fieldWeight in 5329, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=5329)
      0.2 = coord(1/5)
    
    Abstract
    This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking. To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses the provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions. Providing an updated overview of methods, technologies and systems related to Linked Data, this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.
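    The centralized querying covered in Chapter 3 reduces, in its most minimal form, to running SPARQL over an RDF store; a small rdflib sketch with invented data:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")   # invented data
g = Graph()
g.add((EX.book1, EX.title, Literal("Linked Data")))
g.add((EX.book2, EX.title, Literal("Semantic Web")))

# SPARQL evaluated over the in-memory store.
query = """
SELECT ?s ?title
WHERE { ?s <http://example.org/title> ?title }
ORDER BY ?title
"""
for subject, title in g.query(query):
    print(subject, title)
```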
  11. Borst, T.: Repositorien auf ihrem Weg in das Semantic Web : semantisch hergeleitete Interoperabilität als Zielstellung für künftige Repository-Entwicklungen (2014) 0.00
    7.31428E-4 = product of:
      0.0036571398 = sum of:
        0.0036571398 = weight(_text_:a in 1555) [ClassicSimilarity], result of:
          0.0036571398 = score(doc=1555,freq=2.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.07643694 = fieldWeight in 1555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1555)
      0.2 = coord(1/5)
    
    Type
    a
  12. Neubauer, G.: Visualization of typed links in linked data (2017) 0.00
    6.095233E-4 = product of:
      0.0030476165 = sum of:
        0.0030476165 = weight(_text_:a in 3912) [ClassicSimilarity], result of:
          0.0030476165 = score(doc=3912,freq=2.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.06369744 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.2 = coord(1/5)
    
    Type
    a
  13. Neumaier, S.: Data integration for open data on the Web (2017) 0.00
    6.095233E-4 = product of:
      0.0030476165 = sum of:
        0.0030476165 = weight(_text_:a in 3923) [ClassicSimilarity], result of:
          0.0030476165 = score(doc=3923,freq=2.0), product of:
            0.047845192 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.041494574 = queryNorm
            0.06369744 = fieldWeight in 3923, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
      0.2 = coord(1/5)
    
    Type
    a

Languages

  • e 30
  • d 3

Types

  • a 21
  • el 7
  • m 6
  • s 3
  • x 2
  • n 1