Search (10 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Semantic Web"
  • theme_ss:"Semantische Interoperabilität"
  1. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.01
    0.010102809 = product of:
      0.050514046 = sum of:
        0.050514046 = weight(_text_:7 in 3926) [ClassicSimilarity], result of:
          0.050514046 = score(doc=3926,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.2928101 = fieldWeight in 3926, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0625 = fieldNorm(doc=3926)
      0.2 = coord(1/5)
    
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
  2. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.009877752 = product of:
      0.04938876 = sum of:
        0.04938876 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
          0.04938876 = score(doc=759,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.2708308 = fieldWeight in 759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
      0.2 = coord(1/5)
    
    Date
    11. 5.2013 19:22:18
  3. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    0.009877752 = product of:
      0.04938876 = sum of:
        0.04938876 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
          0.04938876 = score(doc=3283,freq=2.0), product of:
            0.18236019 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052075688 = queryNorm
            0.2708308 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
      0.2 = coord(1/5)
    
  4. Veltman, K.H.: Syntactic and semantic interoperability : new approaches to knowledge and the Semantic Web (2001) 0.01
    0.0071437657 = product of:
      0.03571883 = sum of:
        0.03571883 = weight(_text_:7 in 3883) [ClassicSimilarity], result of:
          0.03571883 = score(doc=3883,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.20704803 = fieldWeight in 3883, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.03125 = fieldNorm(doc=3883)
      0.2 = coord(1/5)
    
    Abstract
     At WWW7 (Brisbane, 1997), Tim Berners-Lee outlined his vision of a global reasoning web. At WWW8 (Toronto, May 1998), he developed this into a vision of a semantic web, where one could search not just for isolated words, but for meaning in the form of logically provable claims. In the past four years this vision has spread with amazing speed. The semantic web has been adopted by the European Commission as one of the important goals of the Sixth Framework Programme. In the United States it has become linked with the Defense Advanced Research Projects Agency (DARPA). While this quest to achieve a semantic web is new, the quest for meaning in language has a history that is almost as old as language itself. Accordingly this paper opens with a survey of the historical background. The contributions of the Dublin Core are reviewed briefly. To achieve a semantic web requires both syntactic and semantic interoperability. These challenges are outlined. A basic contention of this paper is that semantic interoperability requires much more than a simple agreement concerning the static meaning of a term. Different levels of agreement (local, regional, national and international) are involved and these levels have their own history. Hence, one of the larger challenges is to create new systems of knowledge organization, which identify and connect these different levels. With respect to meaning or semantics, early twentieth century pioneers such as Wüster were hopeful that it might be sufficient to limit oneself to isolated terms and words without reference to the larger grammatical context: to concept systems rather than to propositional logic. While a fascination with concept systems implicitly dominates many contemporary discussions, this paper suggests why this approach is not sufficient. The final section of this paper explores how an approach using propositional logic could lead to a new approach to universals and particulars. This points to a re-organization of knowledge, and opens the way for a vision of a semantic web with all the historical and cultural richness and complexity of language itself.
    Source
    New review of information networking. 7(2001) no.xx, S.xx-xx
  5. Sakr, S.; Wylot, M.; Mutharaju, R.; Le-Phuoc, D.; Fundulaki, I.: Linked data : storing, querying, and reasoning (2018) 0.01
    0.0071437657 = product of:
      0.03571883 = sum of:
        0.03571883 = weight(_text_:7 in 5329) [ClassicSimilarity], result of:
          0.03571883 = score(doc=5329,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.20704803 = fieldWeight in 5329, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.03125 = fieldNorm(doc=5329)
      0.2 = coord(1/5)
    
    Abstract
     This book describes efficient and effective techniques for harnessing the power of Linked Data by tackling the various aspects of managing its growing volume: storing, querying, reasoning, provenance management and benchmarking. To this end, Chapter 1 introduces the main concepts of the Semantic Web and Linked Data and provides a roadmap for the book. Next, Chapter 2 briefly presents the basic concepts underpinning Linked Data technologies that are discussed in the book. Chapter 3 then offers an overview of various techniques and systems for centrally querying RDF datasets, and Chapter 4 outlines various techniques and systems for efficiently querying large RDF datasets in distributed environments. Subsequently, Chapter 5 explores how streaming requirements are addressed in current, state-of-the-art RDF stream data processing. Chapter 6 covers performance and scaling issues of distributed RDF reasoning systems, while Chapter 7 details benchmarks for RDF query engines and instance matching systems. Chapter 8 addresses provenance management for Linked Data and presents the different provenance models developed. Lastly, Chapter 9 offers a brief summary, highlighting and providing insights into some of the open challenges and research directions. Providing an updated overview of methods, technologies and systems related to Linked Data, this book is mainly intended for students and researchers who are interested in the Linked Data domain. It enables students to gain an understanding of the foundations and underpinning technologies and standards for Linked Data, while researchers benefit from the in-depth coverage of the emerging and ongoing advances in Linked Data storing, querying, reasoning, and provenance management systems. Further, it serves as a starting point to tackle the next research challenges in the domain of Linked Data management.
    Date
    7. 7.2019 11:59:21
  6. Krause, J.: Semantic heterogeneity : comparing new semantic web approaches with those of digital libraries (2008) 0.01
    0.0063142553 = product of:
      0.031571276 = sum of:
        0.031571276 = weight(_text_:7 in 1908) [ClassicSimilarity], result of:
          0.031571276 = score(doc=1908,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.18300632 = fieldWeight in 1908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1908)
      0.2 = coord(1/5)
    
    Date
    7. 6.2008 18:49:12
  7. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.01
    0.0063142553 = product of:
      0.031571276 = sum of:
        0.031571276 = weight(_text_:7 in 1909) [ClassicSimilarity], result of:
          0.031571276 = score(doc=1909,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.18300632 = fieldWeight in 1909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1909)
      0.2 = coord(1/5)
    
    Date
    7. 6.2008 18:47:24
  8. Neumaier, S.: Data integration for open data on the Web (2017) 0.01
    0.0063142553 = product of:
      0.031571276 = sum of:
        0.031571276 = weight(_text_:7 in 3923) [ClassicSimilarity], result of:
          0.031571276 = score(doc=3923,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.18300632 = fieldWeight in 3923, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
      0.2 = coord(1/5)
    
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
  9. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.01
    0.0063142553 = product of:
      0.031571276 = sum of:
        0.031571276 = weight(_text_:7 in 3934) [ClassicSimilarity], result of:
          0.031571276 = score(doc=3934,freq=2.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.18300632 = fieldWeight in 3934, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
      0.2 = coord(1/5)
    
  10. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.01
    0.0062507945 = product of:
      0.03125397 = sum of:
        0.03125397 = weight(_text_:7 in 553) [ClassicSimilarity], result of:
          0.03125397 = score(doc=553,freq=4.0), product of:
            0.17251469 = queryWeight, product of:
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.052075688 = queryNorm
            0.18116702 = fieldWeight in 553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3127685 = idf(docFreq=4376, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.2 = coord(1/5)
    
    Abstract
     Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus have access to other collections using a single vocabulary. This is, however, an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed in order to provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools¹. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. In our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary of the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the point for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing more generic applications, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
     References
     [1] http://www.theeuropeanlibrary.org
     [2] http://www.geheugenvannederland.nl
     [3] http://macs.cenl.org
     [4] Day, M.; Koch, T.; Neuroth, H.: Searching and browsing multiple subject gateways in the Renardus service. In: Proceedings of the RC33 Sixth International Conference on Social Science Methodology, Amsterdam, 2005.
     [5] http://stitch.cs.vu.nl
     [6] http://mandragore.bnf.fr
     [7] http://www.iconclass.nl
     [8] http://www.w3.org/2004/02/skos/
     ¹ The Semantic Web vision supposes sharing data using different conceptualizations (ontologies), and therefore implies tackling the semantic interoperability problem.
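
The mapping-based access described in the abstract of hit 10 (Isaac) can be illustrated with a small sketch: if concept C of vocabulary V is known to be equivalent to concept D of vocabulary W, a query for C is expanded so that objects indexed against D are returned as well. The data structures and names below are purely illustrative assumptions, not the actual STITCH, Mandragore or Iconclass data model.

```python
# Illustrative sketch of cross-vocabulary query expansion via concept mappings
# (hypothetical data; not the real collections or alignment output).
from collections import defaultdict

# Equivalence mappings between (vocabulary, concept) pairs, e.g. produced by an alignment tool.
mappings = {
    ("V", "C"): {("W", "D")},
}

# Objects indexed against concepts from different vocabularies.
index = defaultdict(set)
index[("W", "D")].update({"manuscript-17", "manuscript-42"})

def expand(concept):
    """Return the concept plus all concepts mapped as semantically equivalent."""
    equivalents = {concept}
    equivalents |= mappings.get(concept, set())
    for source, targets in mappings.items():   # also follow mappings in the reverse direction
        if concept in targets:
            equivalents.add(source)
    return equivalents

def search(concept):
    """Retrieve objects indexed against the concept or any equivalent concept."""
    results = set()
    for c in expand(concept):
        results |= index.get(c, set())
    return results

print(search(("V", "C")))   # a query with ("V", "C") also finds objects indexed with ("W", "D")
```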
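
Finally, the relevance breakdowns attached to each hit above are Lucene "explain" traces for the classic TF-IDF similarity (ClassicSimilarity). As a minimal sketch, assuming the standard definitions tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf · queryNorm, fieldWeight = tf · idf · fieldNorm, and a final coord factor, the numbers in these traces can be reproduced; the function and parameter names below are illustrative, not part of the search system.

```python
# Recompute a ClassicSimilarity explain trace from its reported inputs (sketch only).
import math

def classic_tfidf_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
    tf = math.sqrt(freq)                              # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.3127685 for docFreq=4376, maxDocs=44218
    query_weight = idf * query_norm                   # 0.17251469
    field_weight = tf * idf * field_norm              # 0.2928101
    term_score = query_weight * field_weight          # 0.050514046
    return coord * term_score                         # 0.010102809

# Values taken from the explain output of hit 1 (doc=3926, term "7"):
print(classic_tfidf_score(freq=2.0, doc_freq=4376, max_docs=44218,
                          query_norm=0.052075688, field_norm=0.0625, coord=0.2))
```

The same formula, with the per-document freq, docFreq, fieldNorm, and coord values shown in each trace, reproduces the scores of the other hits as well.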