Search (138 results, page 1 of 7)

  • theme_ss:"Semantic Web"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.11
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. Leskinen, P.; Hyvönen, E.: Extracting genealogical networks of linked data from biographical texts (2019) 0.04
    Abstract
    This paper presents the idea and our work of extracting and reassembling a genealogical network automatically from a collection of biographies. The network can be used as a tool for network analysis of historical persons. The data has been published as Linked Data and as an interactive online service as part of the in-use data service and semantic portal BiographySampo - Finnish Biographies on the Semantic Web.
    Series
    Lecture notes in computer science; vol.11762
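The extraction step summarised in the abstract can be caricatured with a single regular expression over biography snippets — a toy illustration only (the pattern and the sentences are invented here; the actual BiographySampo pipeline relies on full language-technology tooling):

```python
import re

# Toy extraction of parent-child edges from biography snippets.
# Pattern and example sentences are invented for illustration.
PATTERN = re.compile(r"(\w[\w ]*?) was the (?:son|daughter) of (\w[\w ]*)")

def extract_edges(texts):
    """Return (child, parent) pairs found in the snippets."""
    edges = []
    for text in texts:
        for child, parent in PATTERN.findall(text):
            edges.append((child.strip(), parent.strip()))
    return edges

bios = [
    "Anna Maria was the daughter of Johan Petter.",
    "Johan Petter was the son of Petter Eriksson.",
]
print(extract_edges(bios))
# [('Anna Maria', 'Johan Petter'), ('Johan Petter', 'Petter Eriksson')]
```

Chaining such edges across documents is what yields the genealogical network the paper then analyses.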
  3. Maltese, V.; Farazi, F.: Towards the integration of knowledge organization systems with the linked data cloud (2011) 0.04
    Abstract
    In representing the shared view of all the people involved, building a Knowledge Organization System (KOS) from scratch is extremely costly, and it is therefore fundamental to reuse existing resources. This can be done by progressively extending the KOS with knowledge coming from similar KOSs and by promoting interoperability among them. The linked data initiative is indeed encouraging people to share and integrate their datasets into a giant network of interconnected resources. This enables different applications to interoperate and share their data. However, the integration should take into account the purpose of the datasets and make the semantics explicit. In fact, the difference in purpose is reflected in a difference in semantics. With this paper we (a) highlight the potential problems that may arise by not taking purpose and semantics into account, (b) make clear how the difference in purpose is reflected in totally different semantics and (c) provide an algorithm to translate from one semantics into another as a preliminary step towards the integration of ontologies designed for different purposes. This will allow reusing the ontologies even in contexts different from those in which they were designed.
    Imprint
    Trento : University of Trento / Department of Information engineering and Computer Science
  4. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.03
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Series
    Lecture notes in computer science; vol.2769
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  5. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.03
    Abstract
    Linked Data provides access to huge, continuously growing amounts of open data and ontologies in RDF format that describe entities, links and properties on those entities. Equipping Linked Data with inference paves the way to making the Semantic Web a reality. In this survey, we describe a unifying framework for RDF ontologies and databases that we call deductive RDF triplestores. It consists of equipping RDF triplestores with Datalog inference rules. This rule language captures in a uniform manner OWL constraints that are useful in practice, such as property transitivity or symmetry, as well as domain-specific rules with practical relevance for users in many domains of interest. The expressivity and genericity of this framework are illustrated for modeling Linked Data applications and for developing inference algorithms. In particular, we show how it allows the problem of data linkage in Linked Data to be modeled as a reasoning problem on possibly decentralized data. We also explain how it makes it possible to efficiently extract expressive modules from Semantic Web ontologies and databases with formal guarantees, while effectively controlling their succinctness. Experiments conducted on real-world datasets have demonstrated the feasibility of this approach and its usefulness in practice for data integration and information extraction.
    Series
    Lecture notes in computer science; vol.10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
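The rule layer the survey describes — Datalog rules over an RDF triplestore, e.g. expressing property transitivity — can be illustrated with a toy forward-chaining loop (a minimal sketch under invented predicate and triple names, not the authors' implementation):

```python
# Naive forward chaining for one Datalog rule expressing transitivity
# of an RDF property:
#   partOf(X, Z) :- partOf(X, Y), partOf(Y, Z).
# Triples are (subject, predicate, object) tuples; names are illustrative.

def transitive_closure(triples, prop):
    """Saturate the triple set under transitivity of `prop` (fixpoint)."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        derived = {
            (s, prop, o2)
            for (s, p, o) in facts if p == prop
            for (s2, p2, o2) in facts if p2 == prop and s2 == o
        }
        new = derived - facts
        if new:
            facts |= new
            changed = True
    return facts

triples = {
    ("femur", "partOf", "leg"),
    ("leg", "partOf", "skeleton"),
}
closed = transitive_closure(triples, "partOf")
# The fact ("femur", "partOf", "skeleton") is now entailed.
```

A deductive triplestore generalises this loop to arbitrary rule sets and to decentralized data, which is where the survey's formal guarantees come in.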
  6. Tillett, B.B.: AACR2 and metadata : library opportunities in the global semantic Web (2003) 0.03
    Abstract
    Explores the opportunities for libraries to contribute to the proposed global "Semantic Web." Library name and subject authority files, including work that IFLA has done related to a new view of "Universal Bibliographic Control" in the Internet environment and the work underway in the U.S. and Europe, are making a reality of the virtual international authority file on the Web. The bibliographic and authority records created according to AACR2 reflect standards for metadata that libraries have provided for years. New opportunities for using these records in the digital world are described (interoperability), including mapping with Dublin Core metadata. AACR2 recently updated Chapter 9 on Electronic Resources. That process and highlights of the changes are described, including Library of Congress' rule interpretations.
  7. Knitting the semantic Web (2007) 0.03
    Abstract
    The Semantic Web, the extension that goes beyond the current Web, better enables computers and people to effectively work together by giving information well-defined meaning. Knitting the Semantic Web explains the interdisciplinary efforts underway to build a more library-like Web through "semantic knitting." The book examines tagging information with standardized semantic metadata to result in a network able to support computational activities and provide people with services efficiently. Leaders in library and information science, computer science, and information intensive domains provide insight and inspiration to give readers a greater understanding in the development, growth, and maintenance of the Semantic Web. Librarians are uniquely qualified to play a major role in the development and maintenance of the Semantic Web. Knitting the Semantic Web closely examines this crucial relationship in detail. This single source reviews the foundations, standards, and tools of the Semantic Web, as well as discussions on projects and perspectives. Many chapters include figures to illustrate concepts and ideas, and the entire text is extensively referenced. Topics in Knitting the Semantic Web include: - RDF, its expressive power, and its ability to underlie the new Library catalog card for the coming century - the value and application for controlled vocabularies - SKOS (Simple Knowledge Organization System), the newest Semantic Web language - managing scheme versioning in the Semantic Web - Physnet portal service for physics - Semantic Web technologies in biomedicine - developing the United Nations Food and Agriculture ontology - Friend Of A Friend (FOAF) vocabulary specification-with a real world case study at a university - and more Knitting the Semantic Web is a stimulating resource for professionals, researchers, educators, and students in library and information science, computer science, information architecture, Web design, and Web services.
    Content
    Contains the contributions: Greenberg, J., E.M. Méndez Rodríguez: Introduction: toward a more library-like Web via semantic knitting (pp.1-8). - Campbell, D.G.: The birth of the new Web: a Foucauldian reading (pp.9-20). - McCathieNevile, C., E.M. Méndez Rodríguez: Library cards for the 21st century (pp.21-45). - Harper, C.A., B.B. Tillett: Library of Congress controlled vocabularies and their application to the Semantic Web (pp.47-68). - Miles, A., J.R. Pérez-Agüera: SKOS: Simple Knowledge Organisation for the Web (pp.69-83). - Tennis, J.T.: Scheme versioning in the Semantic Web (pp.85-104). - Rogers, G.P.: Roles for semantic technologies and tools in libraries (pp.105-125). - Severiens, T., C. Thiemann: RDF database for PhysNet and similar portals (pp.127-147). - Michon, J.: Biomedicine and the Semantic Web: a knowledge model for visual phenotype (pp.149-160). - Liang, A., G. Salokhe, M. Sini et al.: Towards an infrastructure for semantic applications: methodologies for semantic integration of heterogeneous resources (pp.161-189). - Graves, M., A. Constabaris, D. Brickley: FOAF: connecting people on the Semantic Web (pp.191-202). - Greenberg, J.: Advancing Semantic Web via library functions (pp.203-225). - Weibel, S.L.: Social Bibliography: a personal perspective on libraries and the Semantic Web (pp.227-236)
  8. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.03
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Series
    Lecture notes in computer science; vol.4172
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
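The vocabulary-matching step described above can be approximated, at its very simplest, by pairing concepts whose normalised labels coincide — a hedged sketch only (the vocabulary entries are invented, and the ontology mapping tools used in the paper go far beyond label matching):

```python
# Minimal label-based matcher between two controlled vocabularies:
# pair concept ids whose normalised labels are identical.
# The vocabulary entries below are invented for illustration.

def normalise(label):
    return " ".join(label.lower().replace("-", " ").split())

def match_vocabularies(vocab_a, vocab_b):
    """Return (id_a, id_b) pairs with identical normalised labels."""
    index_b = {normalise(lbl): cid for cid, lbl in vocab_b.items()}
    return [
        (cid, index_b[normalise(lbl)])
        for cid, lbl in vocab_a.items()
        if normalise(lbl) in index_b
    ]

vocab_a = {"a1": "Still life", "a2": "Portrait painting"}
vocab_b = {"b7": "still-life", "b9": "Landscape"}
print(match_vocabularies(vocab_a, vocab_b))  # [('a1', 'b7')]
```

In the paper's setting the vocabularies are first formalised in Semantic Web languages (e.g. SKOS) precisely so that richer, structure-aware alignment tools can run on them.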
  9. Miller, E.; Schloss. B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.02
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. 
    The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
    Content
    RDF Data Model: At the core of RDF is a model for representing named properties and their values. These properties serve both to represent attributes of resources (and in this sense correspond to usual attribute-value-pairs) and to represent relationships between resources. The RDF data model is a syntax-independent way of representing RDF statements. RDF statements that are syntactically very different could mean the same thing. This concept of equivalence in meaning is very important when performing queries, aggregation and a number of other tasks at which RDF is aimed. The equivalence is defined in a clean machine understandable way. Two pieces of RDF are equivalent if and only if their corresponding data model representations are the same.
    Table of contents: 1. Introduction; 2. RDF Data Model; 3. RDF Grammar; 4. Signed RDF; 5. Examples; 6. Appendix A: Brief Explanation of XML Namespaces
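The syntax-independent equivalence described in this entry can be made concrete with plain (resource, property, value) triples — a minimal sketch (the example URI is invented; dc:creator/dc:title are Dublin Core property names):

```python
# The RDF data model in miniature: a statement is a
# (resource, property, value) triple, and two documents are equivalent
# iff they denote the same set of triples, regardless of serialization.
# The URI is invented for illustration.

doc_a = [
    ("http://example.org/page", "dc:creator", "Ora Lassila"),
    ("http://example.org/page", "dc:title", "RDF Model and Syntax"),
]
# Same statements in a different order, as another serialization
# of the same graph might yield them:
doc_b = [
    ("http://example.org/page", "dc:title", "RDF Model and Syntax"),
    ("http://example.org/page", "dc:creator", "Ora Lassila"),
]

def equivalent(a, b):
    """Syntax-independent equivalence: compare the triple sets."""
    return set(a) == set(b)

print(equivalent(doc_a, doc_b))  # True
```

This is the "clean machine understandable" equivalence the document refers to: queries and aggregation operate on the triple set, not on any particular XML rendering of it.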
  10. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.02
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  11. Campbell, D.G.: Derrida, logocentrism, and the concept of warrant on the Semantic Web (2008) 0.02
    Content
    The highly-structured data standards of the Semantic Web contain a promising venue for the migration of library subject access standards onto the World Wide Web. The new functionalities of the Web, however, along with the anticipated capabilities of intelligent Web agents, suggest that information on the Semantic Web will have much more flexibility, diversity and mutability. We need, therefore, a method for recognizing and assessing the principles whereby Semantic Web information can combine together in productive and useful ways. This paper will argue that the concept of warrant in traditional library science can provide a useful means of translating library knowledge structures into Web-based knowledge structures. Using Derrida's concept of logocentrism, this paper suggests that while "warrant" in library science traditionally alludes to the principles by which concepts are admitted into the design of a classification or access system, "warrant" on the Semantic Web alludes to the principles by which Web resources can be admitted into a network of information uses. Furthermore, library information practice suggests a far more complex network of warrant concepts that provide a subtlety and richness to knowledge organization that the Semantic Web has not yet attained.
  12. Maltese, V.; Farazi, F.: Towards the integration of knowledge organization systems with the linked data cloud (2011) 0.02
    Abstract
    Building a knowledge organization system (KOS) from scratch is extremely costly, since it must represent the shared view of all the people involved; it is therefore fundamental to reuse existing resources. This can be done by progressively extending the KOS with knowledge coming from similar KOSs and by promoting interoperability among them. The linked data initiative is indeed encouraging people to share and integrate their datasets into a giant network of interconnected resources, enabling different applications to interoperate and share their data. The integration should, however, take into account the purpose of the datasets and make their semantics explicit; in fact, a difference in purpose is reflected in a difference in semantics. With this paper we (a) highlight the potential problems that may arise by not taking purpose and semantics into account; (b) make clear how the difference in purpose is reflected in totally different semantics; and (c) provide an algorithm to translate from one semantics into another as a preliminary step towards the integration of ontologies designed for different purposes. This will allow reusing the ontologies even in contexts different from those in which they were designed.
  13. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    Series
    Communications in computer and information science; 672
  14. Nagenborg, M.: Privacy im Social Semantic Web (2009) 0.01
    Abstract
    This contribution focuses on the design of infrastructures intended to make it possible to disclose and exchange private data in a controlled way. It first recalls that legal and technical data-protection measures always also serve to enable the exchange of data. The fundamental challenge is to do justice to the social and political significance of privacy. From the perspective of information ethics, privacy is understood here as a normative, action-guiding concept. Helen Nissenbaum's concept of "privacy as contextual integrity" is taken as the benchmark for designing the corresponding infrastructures and is used to discuss, among others, the approaches of "end-to-end information accountability" and the "Privacy Identity Management for Europe" project.
  15. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.01
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins, as it were, with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
  16. Breslin, J.G.: Social semantic information spaces (2009) 0.01
    Abstract
    The structural and syntactic web put in place in the early 90s is still much the same as what we use today: resources (web pages, files, etc.) connected by untyped hyperlinks. By untyped, we mean that there is no easy way for a computer to figure out what a link between two pages means - for example, on the W3C website, there are hundreds of links to the various organisations that are registered members of the association, but there is nothing explicitly saying that the link is to an organisation that is a "member of" the W3C or what type of organisation is represented by the link. On John's work page, he links to many papers he has written, but it does not explicitly say that he is the author of those papers or that he wrote such-and-such when he was working at a particular university. In fact, the Web was envisaged to be much more, as one can see from the image in Fig. 1, which is taken from Tim Berners-Lee's original outline for the Web in 1989, entitled "Information Management: A Proposal". In this, all the resources are connected by links describing the type of relationships, e.g. "wrote", "describe", "refers to", etc. This is a precursor to the Semantic Web, which we will come back to later.
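    The contrast between untyped hyperlinks and typed links can be sketched in a few lines of code. This is a minimal illustration, not real W3C data; all URIs and the `memberOf` / `wrote` relation names are hypothetical placeholders.

```python
# Untyped web: a link records only "A points to B" - the meaning is lost.
untyped_links = {
    ("http://w3.org/", "http://example.org/acme"),
    ("http://w3.org/", "http://example.org/spec"),
}

# Typed (semantic) links: each edge carries an explicit relationship,
# so a program can distinguish a member organisation from a spec page.
typed_links = {
    ("http://example.org/acme", "memberOf", "http://w3.org/"),
    ("http://example.org/john", "wrote", "http://example.org/paper1"),
}

def members_of(org, triples):
    """Answerable only because the link type is explicit."""
    return [s for (s, p, o) in triples if p == "memberOf" and o == org]

print(members_of("http://w3.org/", typed_links))
# With untyped_links alone, this question cannot even be posed.
```

    With only the untyped set, both outgoing links from the W3C page look identical; the typed triples make the "member of" question machine-answerable.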
  17. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.01
    Abstract
    Library catalogues contain an enormous amount of structured, high-quality data; however, this data is generally not made available to semantic web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and the mechanisms used to make data available, rather than perfect description of the individual resources. We also present a method of creating links between records of the same work.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Jacobs, I.: From chaos, order: W3C standard helps organize knowledge : SKOS Connects Diverse Knowledge Organization Systems to Linked Data (2009) 0.01
    Content
    SKOS Adapts to the Diversity of Knowledge Organization Systems A useful starting point for understanding the role of SKOS is the set of subject headings published by the US Library of Congress (LOC) for categorizing books, videos, and other library resources. These headings can be used to broaden or narrow queries for discovering resources. For instance, one can narrow a query about books on "Chinese literature" to "Chinese drama," or further still to "Chinese children's plays." Library of Congress subject headings have evolved within a community of practice over a period of decades. By now publishing these subject headings in SKOS, the Library of Congress has made them available to the linked data community, which benefits from a time-tested set of concepts to re-use in their own data. This re-use adds value ("the network effect") to the collection. When people all over the Web re-use the same LOC concept for "Chinese drama," or a concept from some other vocabulary linked to it, this creates many new routes to the discovery of information, and increases the chances that relevant items will be found. As an example of mapping one vocabulary to another, a combined effort from the STITCH, TELplus and MACS Projects provides links between LOC concepts and RAMEAU, a collection of French subject headings used by the Bibliothèque Nationale de France and other institutions. SKOS can be used for subject headings but also many other approaches to organizing knowledge. Because different communities are comfortable with different organization schemes, SKOS is designed to port diverse knowledge organization systems to the Web. "Active participation from the library and information science community in the development of SKOS over the past seven years has been key to ensuring that SKOS meets a variety of needs," said Thomas Baker, co-chair of the Semantic Web Deployment Working Group, which published SKOS. 
"One goal in creating SKOS was to provide new uses for well-established knowledge organization systems by providing a bridge to the linked data cloud." SKOS is part of the Semantic Web technology stack. Like the Web Ontology Language (OWL), SKOS can be used to define vocabularies. But the two technologies were designed to meet different needs. SKOS is a simple language with just a few features, tuned for sharing and linking knowledge organization systems such as thesauri and classification schemes. OWL offers a general and powerful framework for knowledge representation, where additional "rigor" can afford additional benefits (for instance, business rule processing). To get started with SKOS, see the SKOS Primer.
  19. Davies, J.; Duke, A.; Stonkus, A.: OntoShare: evolving ontologies in a knowledge sharing system (2004) 0.01
    Abstract
    We saw in the introduction how the Semantic Web makes possible a new generation of knowledge management tools. We now turn our attention more specifically to Semantic Web based support for virtual communities of practice. The notion of communities of practice has attracted much attention in the field of knowledge management. Communities of practice are groups within (or sometimes across) organizations who share a common set of information needs or problems. They are typically not a formal organizational unit but an informal network whose members share, at least in part, a common agenda and common interests or issues. In one example it was found that a lot of knowledge sharing among copier engineers took place through informal exchanges, often around a water cooler. As well as local, geographically based communities, trends towards flexible working and globalisation have led to interest in supporting dispersed communities using Internet technology. The challenge for organizations is to support such communities and make them effective. Provided with an ontology meeting the needs of a particular community of practice, knowledge management tools can arrange knowledge assets into the predefined conceptual classes of the ontology, allowing more natural and intuitive access to knowledge. Knowledge management tools must give users the ability to organize information into a controllable asset. Building an intranet-based store of information is not sufficient for knowledge management; the relationships within the stored information are vital. These relationships cover such diverse issues as relative importance, context, sequence, significance, causality and association. The potential for knowledge management tools is vast; not only can they make better use of the raw information already available, but they can sift, abstract and help to share new information, and present it to users in new and compelling ways.
    In this chapter, we describe the OntoShare system which facilitates and encourages the sharing of information between communities of practice within (or perhaps across) organizations and which encourages people - who may not previously have known of each other's existence in a large organization - to make contact where there are mutual concerns or interests. As users contribute information to the community, a knowledge resource annotated with meta-data is created. Ontologies defined using the resource description framework (RDF) and RDF Schema (RDFS) are used in this process. RDF is a W3C recommendation for the formulation of meta-data for WWW resources. RDF(S) extends this standard with the means to specify domain vocabulary and object structures - that is, concepts and the relationships that hold between them. In the next section, we describe in detail the way in which OntoShare can be used to share and retrieve knowledge and how that knowledge is represented in an RDF-based ontology. We then proceed to discuss in Section 10.3 how the ontologies in OntoShare evolve over time based on user interaction with the system and motivate our approach to user-based creation of RDF-annotated information resources. The way in which OntoShare can help to locate expertise within an organization is then described, followed by a discussion of the sociotechnical issues of deploying such a tool. Finally, a planned evaluation exercise and avenues for further research are outlined.
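    The core OntoShare loop - a user contributes a document, the system attaches concept metadata drawn from the community's ontology, and retrieval then goes via concept rather than keyword - can be sketched as below. This is a hedged illustration of the general pattern, not OntoShare's actual API; the concept names, document identifiers, and the `taggedWith` relation are all hypothetical.

```python
# A toy "ontology": the concepts a community of practice has agreed on.
ontology = {"SemanticWeb", "KnowledgeManagement"}

# RDF-style metadata triples created as users contribute resources.
annotations = []  # (document, "taggedWith", concept)

def contribute(doc, concept):
    """Add a document to the shared store, annotated with an ontology concept."""
    if concept not in ontology:
        raise ValueError(f"unknown concept: {concept}")
    annotations.append((doc, "taggedWith", concept))

def documents_about(concept):
    """Concept-based retrieval over the accumulated metadata."""
    return [d for (d, _, c) in annotations if c == concept]

contribute("ex:report1", "SemanticWeb")
contribute("ex:memo2", "KnowledgeManagement")
print(documents_about("SemanticWeb"))
```

    Rejecting unknown concepts is what keeps the annotations aligned with the shared ontology; in the real system the ontology itself also evolves from user interaction, as Section 10.3 discusses.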
  20. Luo, Y.; Picalausa, F.; Fletcher, G.H.L.; Hidders, J.; Vansummeren, S.: Storing and indexing massive RDF datasets (2012) 0.01
    Abstract
    The resource description framework (RDF for short) provides a flexible method for modeling information on the Web [34,40]. All data items in RDF are uniformly represented as triples of the form (subject, predicate, object), sometimes also referred to as (subject, property, value) triples. As a running example for this chapter, a small fragment of an RDF dataset concerning music and music fans is given in Fig. 2.1. Spurred by efforts like the Linking Open Data project, increasingly large volumes of data are being published in RDF. Notable contributors in this respect include areas as diverse as the government, the life sciences, Web 2.0 communities, and so on. To give an idea of the volumes of RDF data concerned, as of September 2012, there are 31,634,213,770 triples in total published by data sources participating in the Linking Open Data project. Many individual data sources (like, e.g., PubMed, DBpedia, MusicBrainz) contain hundreds of millions of triples (797, 672, and 179 millions, respectively). These large volumes of RDF data motivate the need for scalable native RDF data management solutions capable of efficiently storing, indexing, and querying RDF data. In this chapter, we present a general and up-to-date survey of the current state of the art in RDF storage and indexing.
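    One standard idea from the literature the survey covers is to keep the triple table under several permutation indexes (commonly named SPO, POS, OSP) so that any single triple pattern can be answered by lookup rather than a scan. The sketch below assumes an in-memory store with made-up music-fan triples in the spirit of the chapter's running example; it is an illustration of the indexing idea, not the authors' system.

```python
from collections import defaultdict

class TripleStore:
    """Toy RDF store with three permutation indexes over (s, p, o)."""

    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))  # subject -> predicate -> objects
        self.pos = defaultdict(lambda: defaultdict(set))  # predicate -> object -> subjects
        self.osp = defaultdict(lambda: defaultdict(set))  # object -> subject -> predicates

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)
        self.osp[o][s].add(p)

    def match(self, s=None, p=None, o=None):
        """Answer one triple pattern; None acts as a wildcard."""
        if s is not None and p is not None:
            return {(s, p, x) for x in self.spo[s][p] if o in (None, x)}
        if p is not None and o is not None:
            return {(x, p, o) for x in self.pos[p][o]}
        # Fallback: full scan for the remaining patterns.
        return {(a, b, c) for a in self.spo for b in self.spo[a]
                for c in self.spo[a][b]
                if s in (None, a) and p in (None, b) and o in (None, c)}

store = TripleStore()
store.add("ex:alice", "ex:likes", "ex:Beatles")
store.add("ex:bob", "ex:likes", "ex:Beatles")
print(sorted(store.match(p="ex:likes", o="ex:Beatles")))
```

    The fallback branch is the part real systems work hardest to avoid; disk-based stores such as those surveyed typically materialize all six permutations so every pattern is a range lookup.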
