Search (44 results, page 1 of 3)

  • theme_ss:"Semantic Web"
  • type_ss:"el"
  1. OWL Web Ontology Language Guide (2004) 0.06
    0.058129102 = product of:
      0.14532275 = sum of:
        0.13287885 = weight(_text_:readable in 4687) [ClassicSimilarity], result of:
          0.13287885 = score(doc=4687,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47999436 = fieldWeight in 4687, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 4687) [ClassicSimilarity], result of:
              0.024887787 = score(doc=4687,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 4687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4687)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
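    
    The explain tree above is Lucene ClassicSimilarity (tf-idf) debug output. Read against the standard ClassicSimilarity formula (an interpretation of the numbers, not part of the result itself), each matching term t in document d contributes
    
        \[ w(t,d) = \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\text{queryWeight}} \;\times\; \underbrace{\sqrt{\mathrm{tf}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\text{fieldWeight}} \]
    
    For "readable" in doc 4687: queryWeight = 6.1439276 × 0.04505818 = 0.2768342 and fieldWeight = √4.0 × 6.1439276 × 0.0390625 = 0.47999436, giving w = 0.13287885 as shown. The document score then applies the coordination factors: (0.13287885 + 0.5 × 0.024887787) × 2/5 = 0.058129102, which rounds to the 0.06 in the result heading.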
    
    Abstract
     The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web-accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes and the relations between them that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to (1) formalize a domain by defining classes and properties of those classes, (2) define individuals and assert properties about them, and (3) reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
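     The three tasks the abstract enumerates map directly onto RDF triples. As a minimal sketch (not taken from the Guide itself; the wine namespace below is hypothetical, loosely echoing the Guide's wine examples) using the Python rdflib library:
    
        from rdflib import Graph, Namespace, RDF, RDFS, OWL
        
        EX = Namespace("http://example.org/wine#")  # hypothetical namespace
        g = Graph()
        g.bind("ex", EX)
        
        # (1) Formalize a domain: declare a class and a property over it
        g.add((EX.Wine, RDF.type, OWL.Class))
        g.add((EX.hasMaker, RDF.type, OWL.ObjectProperty))
        g.add((EX.hasMaker, RDFS.domain, EX.Wine))
        
        # (2) Define an individual and assert a property about it
        g.add((EX.ChateauMargaux, RDF.type, EX.Wine))
        g.add((EX.ChateauMargaux, EX.hasMaker, EX.MargauxEstate))
        
        print(g.serialize(format="turtle"))
    
     Task (3) is then left to an OWL reasoner, which could infer from the domain axiom, for example, that any subject of hasMaker is a Wine.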
  2. Gómez-Pérez, A.; Corcho, O.: Ontology languages for the Semantic Web (2015) 0.04
    0.04256137 = product of:
      0.106403425 = sum of:
        0.09395953 = weight(_text_:readable in 3297) [ClassicSimilarity], result of:
          0.09395953 = score(doc=3297,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 3297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3297)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 3297) [ClassicSimilarity], result of:
              0.024887787 = score(doc=3297,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 3297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3297)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     Ontologies have proven to be an essential element in many applications. They are used in agent systems, knowledge management systems, and e-commerce platforms; they also support natural language generation, intelligent information integration, semantic-based access to the Internet, and information extraction from texts, in addition to many other applications in which they explicitly declare embedded knowledge. However, not only are ontologies useful for applications in which knowledge plays a key role, they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way" [1]. This definition does not differ much from a common definition of an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization" [2]. In fact, new ontology-based applications and knowledge architectures are being developed for this new Web. A common claim of all these approaches is the need for languages to represent the semantic information that this Web requires, solving the problem of heterogeneous data exchange in a heterogeneous environment. Here, we do not decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs. The authors analyze the most representative ontology languages created for the Web and compare them using a common framework.
  3. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    0.03696416 = product of:
      0.09241039 = sum of:
        0.075167626 = weight(_text_:readable in 4709) [ClassicSimilarity], result of:
          0.075167626 = score(doc=4709,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.2715258 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
        0.017242765 = product of:
          0.03448553 = sum of:
            0.03448553 = weight(_text_:data in 4709) [ClassicSimilarity], result of:
              0.03448553 = score(doc=4709,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24204408 = fieldWeight in 4709, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  4. Baker, T.; Bermès, E.; Coyle, K.; Dunsire, G.; Isaac, A.; Murray, P.; Panzer, M.; Schneider, J.; Singer, R.; Summers, E.; Waites, W.; Young, J.; Zeng, M.: Library Linked Data Incubator Group Final Report (2011) 0.03
    0.033142954 = product of:
      0.082857385 = sum of:
        0.030179864 = weight(_text_:bibliographic in 4796) [ClassicSimilarity], result of:
          0.030179864 = score(doc=4796,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.17204987 = fieldWeight in 4796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=4796)
        0.05267752 = product of:
          0.10535504 = sum of:
            0.10535504 = weight(_text_:data in 4796) [ClassicSimilarity], result of:
              0.10535504 = score(doc=4796,freq=56.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.7394569 = fieldWeight in 4796, product of:
                  7.483315 = tf(freq=56.0), with freq of:
                    56.0 = termFreq=56.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4796)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities - focusing on Linked Data - in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate - resources such as bibliographic data, authorities, and concept schemes - more visible and re-usable outside of their original library context on the wider Web. The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives (see the separate report, Library Linked Data Incubator Group: Use Cases) [USECASE]. These use cases provided the starting point for the work summarized in the report: an analysis of the benefits of library Linked Data, a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. The report also summarizes the results of a survey of current Linked Data technologies and an inventory of library Linked Data resources available today (see also the more detailed report, Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets) [VOCABDATASET].
     Key recommendations of the report are:
     • That library leaders identify sets of data as possible candidates for early exposure as Linked Data and foster a discussion about Open Data and rights;
     • That library standards bodies increase library participation in Semantic Web standardization, develop library data standards that are compatible with Linked Data, and disseminate best-practice design patterns tailored to library Linked Data;
     • That data and systems designers design enhanced user services based on Linked Data capabilities, create URIs for the items in library datasets, develop policies for managing RDF vocabularies and their URIs, and express library data by re-using or mapping to existing Linked Data vocabularies;
     • That librarians and archivists preserve Linked Data element sets and value vocabularies and apply library experience in curation and long-term preservation to Linked Data datasets.
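     As a concrete illustration of two of these recommendations (minting URIs for the items in library datasets, and re-using existing Linked Data vocabularies), here is a hedged sketch of one bibliographic record expressed as RDF with the Python rdflib library; all URIs and values are hypothetical:
    
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS
        
        book = URIRef("http://example-library.org/id/book/123")  # hypothetical item URI
        
        g = Graph()
        g.bind("dcterms", DCTERMS)
        g.add((book, DCTERMS.title, Literal("Weaving the Web")))
        g.add((book, DCTERMS.creator, URIRef("http://viaf.org/viaf/12345")))  # illustrative authority URI, not a real record
        g.add((book, DCTERMS.issued, Literal("1999")))
        
        print(g.serialize(format="turtle"))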
  5. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.02
    0.018401727 = product of:
      0.092008635 = sum of:
        0.092008635 = sum of:
          0.049275305 = weight(_text_:data in 759) [ClassicSimilarity], result of:
            0.049275305 = score(doc=759,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.34584928 = fieldWeight in 759, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.04273333 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.04273333 = score(doc=759,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.2 = coord(1/5)
    
    Abstract
     XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry- and domain-specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
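     The interoperability problem the authors describe is easy to reproduce with two equally valid, hypothetical XML vocabularies for the same fact; nothing in XML itself says that the two elements mean the same thing:
    
        import xml.etree.ElementTree as ET
        
        # Two hypothetical industry DTDs describing the same book
        doc_a = ET.fromstring("<book><author>Heflin</author></book>")
        doc_b = ET.fromstring("<publication><creator>Heflin</creator></publication>")
        
        # Both parse fine, but equating author and creator requires
        # semantics that XML alone does not provide; we hand-wire it here.
        print(doc_a.find("author").text == doc_b.find("creator").text)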
    Date
    11. 5.2013 19:22:18
  6. Eckert, K.: SKOS: eine Sprache für die Übertragung von Thesauri ins Semantic Web (2011) 0.02
    0.01773171 = product of:
      0.08865855 = sum of:
        0.08865855 = sum of:
          0.03982046 = weight(_text_:data in 4331) [ClassicSimilarity], result of:
            0.03982046 = score(doc=4331,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2794884 = fieldWeight in 4331, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0625 = fieldNorm(doc=4331)
          0.04883809 = weight(_text_:22 in 4331) [ClassicSimilarity], result of:
            0.04883809 = score(doc=4331,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.30952093 = fieldWeight in 4331, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4331)
      0.2 = coord(1/5)
    
    Abstract
     The Semantic Web, or Linked Data, has the potential to revolutionize the availability of data and knowledge, as well as access to them. Knowledge organization systems such as thesauri, which index and structure the content of the data, can make a major contribution here. Unfortunately, many of these systems are still available only in book form or in special-purpose applications. How, then, can they be used for the Semantic Web? The Simple Knowledge Organization System (SKOS) offers a way to "translate" knowledge organization systems into a form that can be cited on the Web and linked with other resources.
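     What such a "translation" of a thesaurus entry into SKOS can look like is sketched below with the Python rdflib library; the concept scheme and labels are hypothetical:
    
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS
        
        EX = Namespace("http://example.org/thesaurus/")  # hypothetical scheme
        
        g = Graph()
        g.bind("skos", SKOS)
        g.add((EX.c042, RDF.type, SKOS.Concept))
        g.add((EX.c042, SKOS.prefLabel, Literal("Semantic Web", lang="en")))
        g.add((EX.c042, SKOS.altLabel, Literal("Web of Data", lang="en")))
        g.add((EX.c042, SKOS.broader, EX.c001))  # hook into the thesaurus hierarchy
        
        print(g.serialize(format="turtle"))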
    Date
    15. 3.2011 19:21:22
  7. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    0.015772909 = product of:
      0.078864545 = sum of:
        0.078864545 = sum of:
          0.042235978 = weight(_text_:data in 4649) [ClassicSimilarity], result of:
            0.042235978 = score(doc=4649,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.29644224 = fieldWeight in 4649, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
          0.036628567 = weight(_text_:22 in 4649) [ClassicSimilarity], result of:
            0.036628567 = score(doc=4649,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 4649, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4649)
      0.2 = coord(1/5)
    
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
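     The two Web-based measures the study compares are usually given as follows (standard formulations, not quoted from the paper): for terms x and y with page counts f(x) and f(y), co-occurrence count f(x,y), corpus probabilities p(·), and index size N,
    
        \[ \mathrm{NGD}(x,y) = \frac{\max\{\log f(x),\log f(y)\} - \log f(x,y)}{\log N - \min\{\log f(x),\log f(y)\}}, \qquad \mathrm{PMI}(x,y) = \log\frac{p(x,y)}{p(x)\,p(y)} \]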
    Date
    26.12.2011 13:40:22
  8. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    0.013144091 = product of:
      0.065720454 = sum of:
        0.065720454 = sum of:
          0.035196647 = weight(_text_:data in 4553) [ClassicSimilarity], result of:
            0.035196647 = score(doc=4553,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.24703519 = fieldWeight in 4553, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.030523809 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.030523809 = score(doc=4553,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.2 = coord(1/5)
    
    Abstract
     Semantic Web knowledge representation standards, in particular RDF and OWL, often come endowed with a formal semantics, which is considered to be of fundamental importance for the field. Reasoning, i.e. the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms that can be proven to be sound, complete, and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever-increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which hold high promise of high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard for correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a deep learning system on RDF knowledge graphs such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall relative to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  9. Bizer, C.; Cyganiak, R.; Heath, T.: How to publish Linked Data on the Web (2007) 0.01
    0.009855061 = product of:
      0.049275305 = sum of:
        0.049275305 = product of:
          0.09855061 = sum of:
            0.09855061 = weight(_text_:data in 3791) [ClassicSimilarity], result of:
              0.09855061 = score(doc=3791,freq=16.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.69169855 = fieldWeight in 3791, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3791)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This document provides a tutorial on how to publish Linked Data on the Web. After a general overview of the concept of Linked Data, we describe several practical recipes for publishing information as Linked Data on the Web.
    Content
     This tutorial has been superseded by the book Linked Data: Evolving the Web into a Global Data Space, written by Tom Heath and Christian Bizer. This tutorial was published in 2007 and is still online for historical reasons. The Linked Data book was published in 2011 and provides a more detailed and up-to-date introduction to Linked Data.
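     The tutorial's recipes revolve around dereferenceable HTTP URIs that return RDF, typically negotiated via the Accept header. A minimal client-side sketch of that idea in Python (the URI is hypothetical, and a cooperating server is assumed):
    
        import urllib.request
        
        uri = "http://example.org/id/alice"  # hypothetical Linked Data URI
        
        # Ask for RDF rather than HTML via content negotiation
        req = urllib.request.Request(uri, headers={"Accept": "text/turtle"})
        with urllib.request.urlopen(req) as resp:
            print(resp.headers.get("Content-Type"))
            print(resp.read().decode("utf-8"))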
  10. Wright, H.: Semantic Web and ontologies (2018) 0.01
    0.009218565 = product of:
      0.046092827 = sum of:
        0.046092827 = product of:
          0.092185654 = sum of:
            0.092185654 = weight(_text_:data in 80) [ClassicSimilarity], result of:
              0.092185654 = score(doc=80,freq=14.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.64702475 = fieldWeight in 80, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=80)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The Semantic Web and ontologies can help archaeologists combine and share data, making it more open and useful. Archaeologists create diverse types of data, using a wide variety of technologies and methodologies. As in all research domains, these data are increasingly digital. The creation of data that are now openly and persistently available from disparate sources has also inspired efforts to bring archaeological resources together and make them more interoperable. This allows functionality such as federated cross-search across different datasets, and the mapping of heterogeneous data to authoritative structures to build a single data source. Ontologies provide the structure and relationships for Semantic Web data, and have been developed for use in cultural heritage applications generally, and in archaeology specifically. A variety of online resources for archaeology now incorporate Semantic Web principles and technologies.
  11. Auer, S.; Lehmann, J.: Making the Web a data washing machine : creating knowledge out of interlinked data (2010) 0.01
    0.0086213825 = product of:
      0.043106914 = sum of:
        0.043106914 = product of:
          0.08621383 = sum of:
            0.08621383 = weight(_text_:data in 112) [ClassicSimilarity], result of:
              0.08621383 = score(doc=112,freq=24.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.60511017 = fieldWeight in 112, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=112)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     Over the past three years, the Semantic Web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges of the Semantic Web vision: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed: the performance gap between relational and RDF data management has to be closed, the coherence and quality of data published on the Web have to be improved, provenance and trust on the Linked Data Web must be established, and generally the entrance barrier for data publishers and users has to be lowered. In this vision statement we discuss these challenges and argue that research approaches tackling them should be integrated into a mutual refinement cycle. We also present two crucial use cases for the widespread adoption of Linked Data.
    Content
    Vgl.: http://www.semantic-web-journal.net/content/new-submission-making-web-data-washing-machine-creating-knowledge-out-interlinked-data http://www.semantic-web-journal.net/sites/default/files/swj24_0.pdf.
  12. Dextre Clarke, S.G.: Challenges and opportunities for KOS standards (2007) 0.01
    0.008546666 = product of:
      0.04273333 = sum of:
        0.04273333 = product of:
          0.08546666 = sum of:
            0.08546666 = weight(_text_:22 in 4643) [ClassicSimilarity], result of:
              0.08546666 = score(doc=4643,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5416616 = fieldWeight in 4643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4643)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  13. Smith, D.A.; Shadbolt, N.R.: FacetOntology : expressive descriptions of facets in the Semantic Web (2012) 0.01
    0.007870209 = product of:
      0.039351046 = sum of:
        0.039351046 = product of:
          0.07870209 = sum of:
            0.07870209 = weight(_text_:data in 2208) [ClassicSimilarity], result of:
              0.07870209 = score(doc=2208,freq=20.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5523875 = fieldWeight in 2208, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2208)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     The formal structure of the information on the Semantic Web lends itself to faceted browsing, an information retrieval method where users can filter results based on the values of properties ("facets"). Numerous faceted browsers have been created to browse RDF and Linked Data, but these systems use their own ontologies for defining how data is queried to populate their facets. Since the source data is in the same format across these systems (specifically, RDF), we can unify the different methods of describing how to query the underlying data, to enable compatibility across systems and provide an extensible base ontology for future systems. To this end, we present FacetOntology, an ontology that defines how to query data to form a faceted browser, and a number of transformations and filters that can be applied to data before it is shown to users. FacetOntology overcomes limitations in the expressivity of existing work by enabling the full expressivity of SPARQL when selecting data for facets. By applying a FacetOntology definition to data, a set of facets is specified, each with queries and filters to source RDF data, which enables faceted browsing systems to be created using that RDF data.
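     The kind of facet-populating query that FacetOntology describes can be approximated in a few lines with the Python rdflib library: one facet is, in effect, the distinct values of a property together with their counts (the dataset below is hypothetical):
    
        from rdflib import Graph
        
        g = Graph()
        g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:a ex:format "Book" . ex:b ex:format "Book" . ex:c ex:format "Map" .
        """, format="turtle")  # hypothetical dataset
        
        # One facet: distinct values of ex:format with their counts
        q = """
        PREFIX ex: <http://example.org/>
        SELECT ?value (COUNT(?item) AS ?n)
        WHERE { ?item ex:format ?value }
        GROUP BY ?value ORDER BY DESC(?n)
        """
        for row in g.query(q):
            print(row.value, row.n)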
  14. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.01
    0.007870209 = product of:
      0.039351046 = sum of:
        0.039351046 = product of:
          0.07870209 = sum of:
            0.07870209 = weight(_text_:data in 2277) [ClassicSimilarity], result of:
              0.07870209 = score(doc=2277,freq=20.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5523875 = fieldWeight in 2277, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2277)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     Data munging, or the work of remediating, enhancing and transforming library datasets for new or improved uses, has become more important and staff-inclusive in many library technology discussions and projects. Many times we know how we want our data to look, as well as how we want our data to act in discovery interfaces or when exposed, but we are uncertain how to make the data we have into the data we want. This article introduces and compares two library data munging tools that can help: LODRefine (OpenRefine with the DERI RDF Extension) and Catmandu. The strengths and best practices of each tool are discussed in the context of metadata munging use cases for an institution's metadata migration workflow. There is a focus on Linked Open Data modeling and transformation applications of each tool, in particular how metadataists, catalogers, and programmers can create metadata quality reports, enhance existing data with LOD sets, and transform that data to an RDF model. Integration of these tools with other systems and projects, the use of domain-specific transformation languages, and the expansion of vocabulary reconciliation services are also mentioned.
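     Catmandu and LODRefine each have their own fix language and interface; purely as an illustration of the remediate-enhance-transform pattern the article describes (and not either tool's actual API), a Python sketch:
    
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS
        
        record = {"title": " the semantic web ", "year": "2015"}  # messy source record, hypothetical
        
        # Remediate: trim whitespace, normalize case
        title = record["title"].strip().title()
        
        # Transform: map the cleaned fields onto an RDF model
        g = Graph()
        item = URIRef("http://example.org/item/1")  # hypothetical URI
        g.add((item, DCTERMS.title, Literal(title)))
        g.add((item, DCTERMS.issued, Literal(record["year"])))
        print(g.serialize(format="turtle"))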
  15. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.0073257135 = product of:
      0.036628567 = sum of:
        0.036628567 = product of:
          0.07325713 = sum of:
            0.07325713 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07325713 = score(doc=6048,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  16. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    0.0073257135 = product of:
      0.036628567 = sum of:
        0.036628567 = product of:
          0.07325713 = sum of:
            0.07325713 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.07325713 = score(doc=100,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  17. Glimm, B.; Hogan, A.; Krötzsch, M.; Polleres, A.: OWL: Yet to arrive on the Web of Data? (2012) 0.01
    0.007315486 = product of:
      0.03657743 = sum of:
        0.03657743 = product of:
          0.07315486 = sum of:
            0.07315486 = weight(_text_:data in 4798) [ClassicSimilarity], result of:
              0.07315486 = score(doc=4798,freq=12.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.513453 = fieldWeight in 4798, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4798)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Seven years on from OWL becoming a W3C recommendation, and two years on from the more recent OWL 2 W3C recommendation, OWL has still experienced only patchy uptake on the Web. Although certain OWL features (like owl:sameAs) are very popular, other features of OWL are largely neglected by publishers in the Linked Data world. This may suggest that despite the promise of easy implementations and the proposal of tractable profiles suggested in OWL's second version, there is still no "right" standard fragment for the Linked Data community. In this paper, we (1) analyse uptake of OWL on the Web of Data, (2) gain insights into the OWL fragment that is actually used/usable on the Web, where we arrive at the conclusion that this fragment is likely to be a simplified profile based on OWL RL, (3) propose and discuss such a new fragment, which we call OWL LD (for Linked Data).
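     The one OWL feature the authors single out as very popular, owl:sameAs, simply asserts that two URIs denote the same resource. A one-triple sketch with the Python rdflib library (the local URI is hypothetical):
    
        from rdflib import Graph, URIRef
        from rdflib.namespace import OWL
        
        g = Graph()
        g.add((URIRef("http://example.org/resource/Berlin"),  # hypothetical local URI
               OWL.sameAs,
               URIRef("http://dbpedia.org/resource/Berlin")))
        print(g.serialize(format="nt"))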
    Content
     Paper from the workshop Linked Data on the Web (LDOW2012), April 16, 2012, Lyon, France; cf.: http://events.linkeddata.org/ldow2012/.
  18. Carbonaro, A.; Santandrea, L.: A general Semantic Web approach for data analysis on graduates statistics 0.01
    0.0070393295 = product of:
      0.035196647 = sum of:
        0.035196647 = product of:
          0.070393294 = sum of:
            0.070393294 = weight(_text_:data in 5309) [ClassicSimilarity], result of:
              0.070393294 = score(doc=5309,freq=16.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.49407038 = fieldWeight in 5309, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5309)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     Currently, several datasets released in a Linked Open Data format are available at national and international level, but the lack of shared strategies for defining concepts relevant to the statistical publishing community makes it difficult to compare facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the domain of graduates' surveys. The developed system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, which cover more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and its publication as Linked Open Data; iii) provides for their reuse in the open data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the domain of graduates' surveys by proposing a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
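     Querying such a SPARQL endpoint from client code typically looks like the sketch below, using the Python SPARQLWrapper library; the endpoint URL, prefix, and property are hypothetical stand-ins, not SW4AL's actual vocabulary:
    
        from SPARQLWrapper import SPARQLWrapper, JSON
        
        endpoint = SPARQLWrapper("http://example.org/sw4al/sparql")  # hypothetical endpoint URL
        endpoint.setQuery("""
            PREFIX ex: <http://example.org/almalaurea/>
            SELECT ?degree (COUNT(?g) AS ?n)
            WHERE { ?g ex:degree ?degree }
            GROUP BY ?degree
        """)
        endpoint.setReturnFormat(JSON)
        results = endpoint.query().convert()
        for b in results["results"]["bindings"]:
            print(b["degree"]["value"], b["n"]["value"])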
  19. Leskinen, P.; Hyvönen, E.: Extracting genealogical networks of linked data from biographical texts (2019) 0.01
    0.00696858 = product of:
      0.0348429 = sum of:
        0.0348429 = product of:
          0.0696858 = sum of:
            0.0696858 = weight(_text_:data in 5798) [ClassicSimilarity], result of:
              0.0696858 = score(doc=5798,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.48910472 = fieldWeight in 5798, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5798)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper presents the idea and our work of extracting and reassembling a genealogical network automatically from a collection of biographies. The network can be used as a tool for network analysis of historical persons. The data has been published as Linked Data and as an interactive online service as part of the in-use data service and semantic portal BiographySampo - Finnish Biographies on the Semantic Web.
  20. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.01
    0.0066034766 = product of:
      0.033017382 = sum of:
        0.033017382 = product of:
          0.066034764 = sum of:
            0.066034764 = weight(_text_:data in 79) [ClassicSimilarity], result of:
              0.066034764 = score(doc=79,freq=22.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46347913 = fieldWeight in 79, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=79)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
     With the tremendous growth of data volume, and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Data, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because of its ability to integrate data from disparate sources, and hence to make the Web more user-friendly, the Semantic Web is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and it has since come a long way towards becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, making the two an appropriate combination. The objective of this chapter is to provide fundamental insights concerning Semantic Web technologies; in addition, it elucidates the issues as well as the solutions regarding the Semantic Web. The chapter highlights the Semantic Web architecture in detail while also comparing it with the traditional search system, and it classifies the Semantic Web architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different Semantic Web tools used in the framework and technology, and it attempts to illustrate different approaches of Semantic Web search engines. Besides stating numerous challenges faced by the Semantic Web, it also illustrates the solutions.
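     How the combination works in practice: an RDF graph is already a node-edge structure, so its triples can be handed straight to a graph-drawing library. A minimal sketch with rdflib and networkx in Python (the data is hypothetical; matplotlib is assumed for display):
    
        import networkx as nx
        import matplotlib.pyplot as plt
        from rdflib import Graph
        
        g = Graph()
        g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:SemanticWeb ex:extends ex:WWW .
        ex:SemanticWeb ex:uses ex:RDF, ex:OWL .
        """, format="turtle")  # hypothetical data
        
        # Each triple becomes a labelled edge
        nxg = nx.DiGraph()
        for s, p, o in g:
            nxg.add_edge(str(s).split("/")[-1], str(o).split("/")[-1],
                         label=str(p).split("/")[-1])
        
        pos = nx.spring_layout(nxg)
        nx.draw(nxg, pos, with_labels=True, node_color="lightblue")
        nx.draw_networkx_edge_labels(nxg, pos,
                                     edge_labels=nx.get_edge_attributes(nxg, "label"))
        plt.show()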
    Series
    Lecture notes on data engineering and communications technologies book series; vol.32
    Source
    Data visualization and knowledge engineering. Eds. J. Hemanth, et al

Languages

  • e 41
  • d 2