Search (40 results, page 1 of 2)

  • Filter: theme_ss:"Semantic Web"
  • Filter: type_ss:"a"
  1. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.03
    0.029773992 = product of:
      0.08932197 = sum of:
        0.08932197 = sum of:
          0.0655203 = weight(_text_:publishing in 2665) [ClassicSimilarity], result of:
            0.0655203 = score(doc=2665,freq=4.0), product of:
              0.24522576 = queryWeight, product of:
                4.885643 = idf(docFreq=907, maxDocs=44218)
                0.05019314 = queryNorm
              0.2671836 = fieldWeight in 2665, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.885643 = idf(docFreq=907, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2665)
          0.023801671 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
            0.023801671 = score(doc=2665,freq=2.0), product of:
              0.17576782 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05019314 = queryNorm
              0.1354154 = fieldWeight in 2665, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2665)
      0.33333334 = coord(1/3)
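    The relevance value shown for each hit is Lucene ClassicSimilarity (TF-IDF) "explain" output. As a rough sketch (not part of the search system itself), the following reproduces the score of result 1 from the values displayed above; the function names are ours.

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, idf_value, query_norm, field_norm):
    # queryWeight = idf * queryNorm;  fieldWeight = tf(freq) * idf * fieldNorm, with tf = sqrt(freq)
    return (idf_value * query_norm) * (math.sqrt(freq) * idf_value * field_norm)

query_norm, field_norm = 0.05019314, 0.02734375          # values from the explain tree above
publishing = term_score(4.0, idf(907, 44218), query_norm, field_norm)   # ~0.0655203
term_22    = term_score(2.0, idf(3622, 44218), query_norm, field_norm)  # ~0.0238017
coord = 1.0 / 3.0                                         # coord(1/3): 1 of 3 query clauses matched

print((publishing + term_22) * coord)                     # ~0.029773992, the displayed score
```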
    
    Abstract
    Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations, and Geonames (http://geonames.org/) URIs for places, are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists.
Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
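    The sketch below only illustrates the disambiguation step described in this abstract (free-text mentions resolved to stable identifiers from naming authorities); the mentions, URIs, and function are invented placeholders in the styles the abstract names, not code from the paper.

```python
# Hypothetical lookup table: free-text mentions -> authority URIs (all placeholder values)
authority_ids = {
    "Dublin":        "http://sws.geonames.org/0000000/",                # place  (where)
    "Douglas Hyde":  "http://worldcat.org/identities/lccn-n00-000000",  # person (who)
    "Easter Rising": "http://rdf.freebase.com/ns/m.00000",              # event  (when)
}

def disambiguate(mention: str):
    """Map a mention to a stable identifier, if some naming authority covers it."""
    return authority_ids.get(mention)

for m in ("Dublin", "Douglas Hyde", "Easter Rising", "an uncovered topic"):
    print(m, "->", disambiguate(m))
```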
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Berners-Lee, T.; Hendler, J.: Publishing on the semantic Web (2001) 0.03
    0.0264742 = product of:
      0.0794226 = sum of:
        0.0794226 = product of:
          0.1588452 = sum of:
            0.1588452 = weight(_text_:publishing in 3358) [ClassicSimilarity], result of:
              0.1588452 = score(doc=3358,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.64775085 = fieldWeight in 3358, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3358)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  3. Tillett, B.B.: AACR2 and metadata : library opportunities in the global semantic Web (2003) 0.02
    0.02397386 = product of:
      0.07192158 = sum of:
        0.07192158 = weight(_text_:electronic in 5510) [ClassicSimilarity], result of:
          0.07192158 = score(doc=5510,freq=4.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.3665161 = fieldWeight in 5510, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.046875 = fieldNorm(doc=5510)
      0.33333334 = coord(1/3)
    
    Abstract
    Explores the opportunities for libraries to contribute to the proposed global "Semantic Web." Library name and subject authority files, including IFLA's work on a new view of "Universal Bibliographic Control" in the Internet environment and work underway in the U.S. and Europe, are making the virtual international authority file on the Web a reality. The bibliographic and authority records created according to AACR2 reflect standards for metadata that libraries have provided for years. New opportunities for using these records in the digital world are described, including mapping to Dublin Core metadata for interoperability. AACR2 recently updated Chapter 9 on Electronic Resources; that process and highlights of the changes are described, including the Library of Congress's rule interpretations.
    Content
    Contribution to a special issue "Electronic cataloging: AACR2 and metadata for serials and monographs"
  4. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.02
    0.017649466 = product of:
      0.052948397 = sum of:
        0.052948397 = product of:
          0.10589679 = sum of:
            0.10589679 = weight(_text_:publishing in 3926) [ClassicSimilarity], result of:
              0.10589679 = score(doc=3926,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.4318339 = fieldWeight in 3926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3926)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  5. Neumaier, S.: Data integration for open data on the Web (2017) 0.02
    0.015600072 = product of:
      0.046800215 = sum of:
        0.046800215 = product of:
          0.09360043 = sum of:
            0.09360043 = weight(_text_:publishing in 3923) [ClassicSimilarity], result of:
              0.09360043 = score(doc=3923,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38169086 = fieldWeight in 3923, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3923)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this lecture we will discuss and introduce the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to different data formats prevalent on the Web, namely, standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourthly, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), as well as how or whether OWL and RDFS reasoning on top of integrated open data could help.
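    As a small illustration of the three families of formats mentioned above (not an example from the lecture itself), the snippet below writes one invented record as tabular (CSV), tree-shaped (JSON), and graph (subject-predicate-object) data.

```python
import csv, json, io

# Invented toy record used only to contrast the three format families
record = {"dataset": "air-quality-2016", "publisher": "ExampleCity", "licence": "CC-BY-4.0"}

# Tabular: a CSV header plus one row
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(record))
writer.writeheader()
writer.writerow(record)
print(buf.getvalue())

# Tree-shaped: a nested JSON document
print(json.dumps({"dataset": {"id": record["dataset"],
                              "publisher": record["publisher"],
                              "licence": record["licence"]}}, indent=2))

# Graph: the same facts as subject-predicate-object triples (the shape RDF and Linked Data use)
triples = [(record["dataset"], "publishedBy", record["publisher"]),
           (record["dataset"], "licensedUnder", record["licence"])]
print(triples)
```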
    Imprint
    Cham : Springer International Publishing
  6. Hyvönen, E.; Leskinen, P.; Tamper, M.; Keravuori, K.; Rantala, H.; Ikkala, E.; Tuominen, J.: BiographySampo - publishing and enriching biographies on the Semantic Web for digital humanities research (2019) 0.02
    0.015600072 = product of:
      0.046800215 = sum of:
        0.046800215 = product of:
          0.09360043 = sum of:
            0.09360043 = weight(_text_:publishing in 5799) [ClassicSimilarity], result of:
              0.09360043 = score(doc=5799,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38169086 = fieldWeight in 5799, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5799)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper argues for a paradigm shift in publishing and using biographical dictionaries on the web, based on Linked Data. The idea is to provide the user with an enhanced reading experience of biographies by enriching contents with data linking and reasoning. In addition, versatile tooling for 1) biographical research of individual persons as well as for 2) prosopographical research on groups of people is provided. To demonstrate and evaluate the new possibilities, we present the semantic portal "BiographySampo - Finnish Biographies on the Semantic Web". The system is based on a knowledge graph extracted automatically from a collection of 13,100 textual biographies, enriched with data linking to 16 external data sources, and by harvesting external collection data from libraries, museums, and archives. The portal was released in September 2018 for free public use at: http://biografiasampo.fi.
  7. Miles, A.; Pérez-Agüera, J.R.: SKOS: Simple Knowledge Organisation for the Web (2006) 0.02
    0.015443282 = product of:
      0.046329845 = sum of:
        0.046329845 = product of:
          0.09265969 = sum of:
            0.09265969 = weight(_text_:publishing in 504) [ClassicSimilarity], result of:
              0.09265969 = score(doc=504,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.37785465 = fieldWeight in 504, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=504)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article introduces the Simple Knowledge Organisation System (SKOS), a Semantic Web language for representing controlled structured vocabularies, including thesauri, classification schemes, subject heading systems and taxonomies. SKOS provides a framework for publishing thesauri, classification schemes, and subject indexes on the Web, and for applying these systems to resource collections that are part of the Semantic Web. Semantic Web applications may harvest and merge SKOS data to integrate and enhance retrieval services across multiple collections (e.g. libraries). This article also describes some alternatives for integrating Semantic Web services based on the Resource Description Framework (RDF) and SKOS into a distributed enterprise architecture.
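    To make the SKOS model described above more concrete, here is a minimal sketch of a two-concept vocabulary built with rdflib's bundled SKOS namespace; the concept URIs and labels are invented for illustration and are not from the article.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")   # hypothetical vocabulary namespace
g = Graph()

# Two concepts linked by a broader/narrower relation, as in a thesaurus
g.add((EX.SemanticWeb, RDF.type, SKOS.Concept))
g.add((EX.SemanticWeb, SKOS.prefLabel, Literal("Semantic Web", lang="en")))
g.add((EX.LinkedData, RDF.type, SKOS.Concept))
g.add((EX.LinkedData, SKOS.prefLabel, Literal("Linked Data", lang="en")))
g.add((EX.LinkedData, SKOS.broader, EX.SemanticWeb))

# Publish the vocabulary in a standard RDF serialization
print(g.serialize(format="turtle"))
```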
  8. Gibbins, N.; Shadbolt, N.: Resource Description Framework (RDF) (2009) 0.02
    0.015443282 = product of:
      0.046329845 = sum of:
        0.046329845 = product of:
          0.09265969 = sum of:
            0.09265969 = weight(_text_:publishing in 4695) [ClassicSimilarity], result of:
              0.09265969 = score(doc=4695,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.37785465 = fieldWeight in 4695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4695)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Resource Description Framework (RDF) is the standard knowledge representation language for the Semantic Web, an evolution of the World Wide Web that aims to provide a well-founded infrastructure for publishing, sharing and querying structured data. This entry provides an introduction to RDF and its related vocabulary definition language RDF Schema, and explains its relationship with the OWL Web Ontology Language. Finally, it provides an overview of the historical development of RDF and related languages for Web metadata.
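    As a hedged illustration of "publishing, sharing and querying structured data" in RDF (not code from this entry), the sketch below builds two triples and runs a SPARQL query over them with rdflib; the resource URIs are invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")   # hypothetical namespace for illustration
g = Graph()

# Two RDF statements, each a (subject, predicate, object) triple
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))

# Query the structured data with SPARQL
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?person foaf:name ?name ; foaf:knows ?other . }
""")
for row in results:
    print(row.name)   # -> Alice
```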
  9. Corcho, O.; Poveda-Villalón, M.; Gómez-Pérez, A.: Ontology engineering in the era of linked data (2015) 0.02
    0.015443282 = product of:
      0.046329845 = sum of:
        0.046329845 = product of:
          0.09265969 = sum of:
            0.09265969 = weight(_text_:publishing in 3293) [ClassicSimilarity], result of:
              0.09265969 = score(doc=3293,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.37785465 = fieldWeight in 3293, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3293)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Ontology engineering encompasses the methods, tools and techniques used to develop ontologies. Without requiring ontologies, linked data is driving a paradigm shift, bringing benefits and drawbacks to the publishing world. Ontologies may be heavyweight, supporting deep understanding of a domain, or lightweight, suited to simple classification of concepts and more adaptable for linked data. They also vary in domain specificity, usability and reusability. Hybrid vocabularies drawing elements from diverse sources often suffer from internally incompatible semantics. To serve linked data purposes, ontology engineering teams require a range of skills in philosophy, computer science, web development, librarianship and domain expertise.
  10. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.02
    0.015443282 = product of:
      0.046329845 = sum of:
        0.046329845 = product of:
          0.09265969 = sum of:
            0.09265969 = weight(_text_:publishing in 3939) [ClassicSimilarity], result of:
              0.09265969 = score(doc=3939,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.37785465 = fieldWeight in 3939, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3939)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  11. Cali, A.: Ontology querying : datalog strikes back (2017) 0.01
    0.0132371 = product of:
      0.0397113 = sum of:
        0.0397113 = product of:
          0.0794226 = sum of:
            0.0794226 = weight(_text_:publishing in 3928) [ClassicSimilarity], result of:
              0.0794226 = score(doc=3928,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.32387543 = fieldWeight in 3928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3928)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  12. Sequeda, J.F.: Integrating relational databases with the Semantic Web : a reflection (2017) 0.01
    0.0132371 = product of:
      0.0397113 = sum of:
        0.0397113 = product of:
          0.0794226 = sum of:
            0.0794226 = weight(_text_:publishing in 3935) [ClassicSimilarity], result of:
              0.0794226 = score(doc=3935,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.32387543 = fieldWeight in 3935, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3935)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  13. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.01
    0.012480058 = product of:
      0.037440173 = sum of:
        0.037440173 = product of:
          0.07488035 = sum of:
            0.07488035 = weight(_text_:publishing in 4404) [ClassicSimilarity], result of:
              0.07488035 = score(doc=4404,freq=4.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.3053527 = fieldWeight in 4404, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4404)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in a relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy, giving it an enormous expressiveness. People add private, educational or organizational content to the web that is of an immensely diverse nature. Content on the web is growing closer to a real universal knowledge base, with one problem left relatively undefined: the interpretation of its contents. Although the web is widely acknowledged for its general and universal advantages, its increasing popularity also reveals some major drawbacks. The development of the information content on the web during the last year alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
  14. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.01
    0.011334131 = product of:
      0.03400239 = sum of:
        0.03400239 = product of:
          0.06800478 = sum of:
            0.06800478 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.06800478 = score(doc=2090,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  15. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 97) [ClassicSimilarity], result of:
              0.0661855 = score(doc=97,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 97, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=97)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Hershey, PA : IGI Publishing
  16. Ghorbel, H.; Bahri, A.; Bouaziz, R.: Fuzzy ontologies building platform for Semantic Web : FOB platform (2012) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 98) [ClassicSimilarity], result of:
              0.0661855 = score(doc=98,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 98, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=98)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Hershey, PA : IGI Publishing
  17. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 99) [ClassicSimilarity], result of:
              0.0661855 = score(doc=99,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 99, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Hershey, PA : IGI Publishing
  18. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 112) [ClassicSimilarity], result of:
              0.0661855 = score(doc=112,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=112)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Over the past three years, the semantic web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges in the area of the Semantic Web vision: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed: the performance gap between relational and RDF data management has to be closed, coherence and quality of data published on the Web have to be improved, provenance and trust on the Linked Data Web must be established, and generally the entrance barrier for data publishers and users has to be lowered. In this vision statement we discuss these challenges and argue that research approaches tackling these challenges should be integrated into a mutual refinement cycle. We also present two crucial use cases for the widespread adoption of linked data.
  19. Rousset, M.-C.; Atencia, M.; David, J.; Jouanot, F.; Ulliana, F.; Palombi, O.: Datalog revisited for reasoning in linked data (2017) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 3936) [ClassicSimilarity], result of:
              0.0661855 = score(doc=3936,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 3936, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3936)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing
  20. Kaminski, R.; Schaub, T.; Wanko, P.: A tutorial on hybrid answer set solving with clingo (2017) 0.01
    0.011030916 = product of:
      0.03309275 = sum of:
        0.03309275 = product of:
          0.0661855 = sum of:
            0.0661855 = weight(_text_:publishing in 3937) [ClassicSimilarity], result of:
              0.0661855 = score(doc=3937,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.26989618 = fieldWeight in 3937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3937)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    Cham : Springer International Publishing

Languages

  • e (English) 36
  • d (German) 4