Search (93 results, page 1 of 5)

  • theme_ss:"Semantic Web"
  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Zumstein, P.: Die Rolle des Semantic Web für Bibliotheken : Linked Open Data und mehr: Welche Strategien können hier die Bibliotheken in die Zukunft führen? (2012) 0.08
    0.08254977 = product of:
      0.123824656 = sum of:
        0.07707474 = weight(_text_:wide in 2450) [ClassicSimilarity], result of:
          0.07707474 = score(doc=2450,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.342674 = fieldWeight in 2450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2450)
        0.046749923 = product of:
          0.09349985 = sum of:
            0.09349985 = weight(_text_:web in 2450) [ClassicSimilarity], result of:
              0.09349985 = score(doc=2450,freq=10.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.5643819 = fieldWeight in 2450, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2450)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
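The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown. As a sketch, its arithmetic can be reproduced directly from the constants printed in the tree (queryWeight = idf * queryNorm; fieldWeight = sqrt(termFreq) * idf * fieldNorm):

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """Reproduce one weight(...) node of a ClassicSimilarity explain tree."""
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                   # tf = sqrt(termFreq)
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Constants read off the first result (doc 2450):
w_wide = term_weight(2.0, 4.4307585, 0.050763648, 0.0546875)   # ~0.07707474
w_web = term_weight(10.0, 3.2635105, 0.050763648, 0.0546875)   # ~0.09349985

# The "web" branch sits one level deeper and is scaled by coord(1/2);
# the document score is the sum of both branches scaled by coord(2/3).
score = (w_wide + w_web * 0.5) * (2.0 / 3.0)                   # ~0.08254977
```

The same recipe reproduces every weight(...) node in the listings below; only the coord factors differ with the number of matching query clauses.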
    
    Abstract
    The Semantic Web is the vision of an extension of the World Wide Web in which data are not merely displayed in a form easily understood by humans but can also be processed by machines. With suitably designed links between individual web resources, the web could be used as a huge global database, in which software agents could then handle even complex questions and planning tasks on our behalf. This paper aims to show that every library holds data of interest to the Semantic Web and can in turn profit from it. One focus is on possible application scenarios, with particular attention to the library sector.
    Theme
    Semantic Web
  2. Neubauer, G.: Visualization of typed links in linked data (2017) 0.08
    0.08006412 = product of:
      0.12009617 = sum of:
        0.07785724 = weight(_text_:wide in 3912) [ClassicSimilarity], result of:
          0.07785724 = score(doc=3912,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.34615302 = fieldWeight in 3912, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.04223893 = product of:
          0.08447786 = sum of:
            0.08447786 = weight(_text_:web in 3912) [ClassicSimilarity], result of:
              0.08447786 = score(doc=3912,freq=16.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.5099235 = fieldWeight in 3912, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3912)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This thesis deals with the visualization of typed links in Linked Data. The scholarly fields that broadly delimit the content of the contribution are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, represents an extension of the World Wide Web. Current research concerns the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are a central requirement of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary motivation for this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, the information is extended step by step, with the aim of offering practical guidelines, toward a network of the design guidelines worked out. By describing the designs of two alternative visualizations in a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that emerged from the testing.
    Theme
    Semantic Web
  3. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.07
    0.07022996 = product of:
      0.105344936 = sum of:
        0.044042703 = weight(_text_:wide in 1634) [ClassicSimilarity], result of:
          0.044042703 = score(doc=1634,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 1634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.061302237 = sum of:
          0.033791143 = weight(_text_:web in 1634) [ClassicSimilarity], result of:
            0.033791143 = score(doc=1634,freq=4.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.2039694 = fieldWeight in 1634, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
          0.027511096 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.027511096 = score(doc=1634,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities alone cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). Semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate the methodology's potential for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, the methodology has yet to be fully automated and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
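The unification rules themselves are not spelled out in the abstract; the following sketch (with hypothetical relation labels and a hypothetical entailment table) only illustrates the general idea of matching, and contradiction detection, over relation pairs that connect the same classes:

```python
# Hypothetical sketch of relation-based ontology unification: each ontology
# is a set of (subject, relation, object) triples; a lexical entailment
# table says which relation labels entail one another.
ENTAILS = {                      # hypothetical lexical entailment table
    ("causes", "leads to"): True,
    ("part of", "belongs to"): True,
}

def entail(r1, r2):
    return r1 == r2 or ENTAILS.get((r1, r2)) or ENTAILS.get((r2, r1))

def unify(onto_a, onto_b):
    """Return relation pairs that match via entailment, and pairs that link
    the same classes with non-entailing (potentially contradictory) labels."""
    matches, conflicts = [], []
    for (s1, r1, o1) in onto_a:
        for (s2, r2, o2) in onto_b:
            if (s1, o1) == (s2, o2):
                (matches if entail(r1, r2) else conflicts).append((r1, r2))
    return matches, conflicts

a = {("smoking", "causes", "cancer"), ("nicotine", "part of", "tobacco")}
b = {("smoking", "leads to", "cancer"), ("nicotine", "unrelated to", "tobacco")}
m, c = unify(a, b)   # one matching pair, one conflicting pair
```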
    Date
    20. 1.2015 18:30:22
    Theme
    Semantic Web
  4. Oliveira Machado, L.M.; Souza, R.R.; Simões, M. da Graça: Semantic web or web of data? : a diachronic study (1999 to 2017) of the publications of Tim Berners-Lee and the World Wide Web Consortium (2019) 0.07
    0.06972195 = product of:
      0.10458292 = sum of:
        0.055053383 = weight(_text_:wide in 5300) [ClassicSimilarity], result of:
          0.055053383 = score(doc=5300,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 5300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5300)
        0.049529534 = product of:
          0.09905907 = sum of:
            0.09905907 = weight(_text_:web in 5300) [ClassicSimilarity], result of:
              0.09905907 = score(doc=5300,freq=22.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.59793836 = fieldWeight in 5300, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5300)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The web has been, in the last decades, the place where information retrieval achieved its maximum importance, given its ubiquity and the sheer volume of information. However, its exponential growth made the retrieval task increasingly hard, leaving its effectiveness dependent on idiosyncratic and somewhat biased ranking algorithms. To deal with this problem, a "new" web, called the Semantic Web (SW), was proposed, bringing along concepts like "Web of Data" and "Linked Data," although the definitions and connections among these concepts are often unclear. Based on a qualitative approach built over a literature review, a definition of SW is presented, and the related concepts sometimes used as synonyms are discussed. It concludes that the SW is a comprehensive and ambitious construct that includes the great purpose of making the web a global database. It also follows the specifications developed and/or associated with its operationalization and the necessary procedures for the connection of data in an open format on the web. The goals of this comprehensive SW are the union of two outcomes still only tenuously connected: the virtually unlimited possibility of connections between data (the web domain) and the potential for automated inference by "intelligent" systems (the semantic component).
    Theme
    Semantic Web
  5. Martínez-González, M.M.; Alvite-Díez, M.L.: Thesauri and Semantic Web : discussion of the evolution of thesauri toward their integration with the Semantic Web (2019) 0.07
    0.066569686 = product of:
      0.09985453 = sum of:
        0.055053383 = weight(_text_:wide in 5997) [ClassicSimilarity], result of:
          0.055053383 = score(doc=5997,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 5997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.044801146 = product of:
          0.08960229 = sum of:
            0.08960229 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
              0.08960229 = score(doc=5997,freq=18.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.5408555 = fieldWeight in 5997, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Thesauri are Knowledge Organization Systems (KOS) that arise from the consensus of wide communities. They have been in use for many years and are regularly updated. Whereas in the past thesauri were designed for information professionals for indexing and searching, today there is a demand for conceptual vocabularies that enable inferencing by machines. The development of the Semantic Web has brought a new opportunity for thesauri, but thesauri also face the challenge of proving that they add value to it. The evolution of thesauri toward their integration with the Semantic Web is examined. Elements and structures in the thesaurus standard, ISO 25964, and SKOS (Simple Knowledge Organization System), the Semantic Web standard for representing KOS, are reviewed and compared. Moreover, the integrity rules of thesauri are contrasted with the axioms of SKOS. How SKOS has been applied to represent some real thesauri is taken into account. Three thesauri are chosen for this aim: AGROVOC, EuroVoc and the UNESCO Thesaurus. Based on the results of this comparison and analysis, the benefits that Semantic Web technologies offer to thesauri, how thesauri can contribute to the Semantic Web, and the challenges that would help to improve their integration with the Semantic Web are discussed.
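One of the contrasts discussed, thesaurus integrity rules versus SKOS axioms, can be illustrated with a toy example (hypothetical concepts, plain tuples instead of an RDF library): ISO 25964 treats BT/NT as mutual inverses, so a simple integrity check looks for hierarchy triples whose inverse is missing:

```python
# Minimal sketch: a thesaurus as (subject, predicate, object) triples with
# SKOS-style predicate names; the concepts are hypothetical.
triples = {
    ("Animals", "skos:narrower", "Cats"),
    ("Cats", "skos:broader", "Animals"),
    ("Animals", "skos:narrower", "Dogs"),   # inverse broader triple missing
}

def broader_narrower_violations(triples):
    """Report hierarchy triples whose inverse is not asserted, in the spirit
    of the ISO 25964 rule that BT and NT are mutual inverses."""
    inverse = {"skos:broader": "skos:narrower",
               "skos:narrower": "skos:broader"}
    return [(s, p, o) for (s, p, o) in triples
            if p in inverse and (o, inverse[p], s) not in triples]

violations = broader_narrower_violations(triples)
```

SKOS itself does not require the inverse triple to be asserted (skos:broader and skos:narrower are merely declared inverse properties), which is exactly the kind of gap between thesaurus integrity rules and SKOS axioms the abstract refers to.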
    Theme
    Semantic Web
  6. Menzel, C.: Knowledge representation, the World Wide Web, and the evolution of logic (2011) 0.06
    0.06473547 = product of:
      0.0971032 = sum of:
        0.06606405 = weight(_text_:wide in 761) [ClassicSimilarity], result of:
          0.06606405 = score(doc=761,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.29372054 = fieldWeight in 761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=761)
        0.031039147 = product of:
          0.062078293 = sum of:
            0.062078293 = weight(_text_:web in 761) [ClassicSimilarity], result of:
              0.062078293 = score(doc=761,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.37471575 = fieldWeight in 761, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=761)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this paper, I have traced a series of evolutionary adaptations of FOL, motivated entirely by its use by knowledge engineers to represent and share information on the Web, culminating in the development of Common Logic. While the primary goal in this paper has been to document this evolution, it is arguable, I think, that CL's syntactic and semantic egalitarianism better realizes the goal of "topic neutrality" that a logic should ideally exemplify - understood, at least in part, as the idea that logic should as far as possible not itself embody any metaphysical presuppositions. Instead of retaining the traditional metaphysical divisions of FOL that reflect its Fregean origins, CL begins, as it were, with a single, metaphysically homogeneous domain in which, potentially, anything can play the traditional roles of object, property, relation, and function. Note that the effect of this is not to destroy traditional metaphysical divisions. Rather, it is simply to refrain from building those divisions explicitly into one's logic; instead, such divisions are left to the user to introduce and enforce axiomatically in an explicit metaphysical theory.
    Theme
    Semantic Web
  7. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.06
    0.059333354 = product of:
      0.08900003 = sum of:
        0.062285792 = weight(_text_:wide in 230) [ClassicSimilarity], result of:
          0.062285792 = score(doc=230,freq=4.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.2769224 = fieldWeight in 230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=230)
        0.026714243 = product of:
          0.053428486 = sum of:
            0.053428486 = weight(_text_:web in 230) [ClassicSimilarity], result of:
              0.053428486 = score(doc=230,freq=10.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.32250395 = fieldWeight in 230, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=230)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to overcome the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model, oriented to the exploitation of domain Knowledge Bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword- and semantic-based search. Additional contributions include: an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. The conducted experiments show that our semantic search model obtained comparable or better performance results (in terms of MAP and P@10 values) than the best TREC automatic system.
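The paper's rank fusion technique is not specified in the abstract; as a rough illustration of the general idea only, a baseline fusion normalizes each score list and mixes them linearly, falling back to keyword evidence where the still-sparse semantic side has no coverage (all document names and scores below are hypothetical):

```python
def fuse(keyword, semantic, alpha=0.5):
    """Linear score fusion: normalize each score dict to [0, 1], then mix.
    Documents missing from the semantic side fall back to keyword evidence,
    one simple way to soften knowledge-base sparseness."""
    def norm(scores):
        if not scores:
            return {}
        top = max(scores.values())
        return {d: s / top for d, s in scores.items()} if top else scores
    kw, sem = norm(keyword), norm(semantic)
    return {d: (alpha * sem[d] + (1 - alpha) * kw.get(d, 0.0)) if d in sem
               else kw[d]                      # sparse KB: keyword fallback
            for d in set(kw) | set(sem)}

fused = fuse({"d1": 2.0, "d2": 1.0, "d3": 0.5},   # hypothetical keyword scores
             {"d1": 0.9, "d2": 0.3})              # semantic KB misses d3
ranking = sorted(fused, key=fused.get, reverse=True)
```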
    Source
    Web semantics: science, services and agents on the World Wide Web. 9(2011) no.4, S.434-452
    Theme
    Semantic Web
  8. Piscitelli, F.A.: Library linked data models : library data in the Semantic Web (2019) 0.05
    0.053946227 = product of:
      0.08091934 = sum of:
        0.055053383 = weight(_text_:wide in 5478) [ClassicSimilarity], result of:
          0.055053383 = score(doc=5478,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.24476713 = fieldWeight in 5478, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5478)
        0.025865955 = product of:
          0.05173191 = sum of:
            0.05173191 = weight(_text_:web in 5478) [ClassicSimilarity], result of:
              0.05173191 = score(doc=5478,freq=6.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.3122631 = fieldWeight in 5478, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5478)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This exploratory study examined Linked Data (LD) schemas/ontologies and data models proposed or in use by libraries around the world using MAchine Readable Cataloging (MARC) as a basis for comparison of the scope and extensibility of these potential new standards. The researchers selected 14 libraries from national libraries, academic libraries, government libraries, public libraries, multi-national libraries, and cultural heritage centers currently developing Library Linked Data (LLD) schemas. The choices of models, schemas, and elements used in each library's LD can create interoperability issues for LD services because of substantial differences between schemas and data models evolving via local decisions. The researchers observed that a wide variety of vocabularies and ontologies were used for LLD including common web schemas such as Dublin Core (DC)/DCTerms, Schema.org and Resource Description Framework (RDF), as well as deprecated schemas such as MarcOnt and rdagroup1elements. A sharp divide existed as well between LLD schemas using variations of the Functional Requirements for Bibliographic Records (FRBR) data model and those with different data models or even with no listed data model. Libraries worldwide are not using the same elements or even the same ontologies, schemas and data models to describe the same materials using the same general concepts.
    Theme
    Semantic Web
  9. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.05
    0.0510852 = product of:
      0.1532556 = sum of:
        0.1532556 = sum of:
          0.08447786 = weight(_text_:web in 2090) [ClassicSimilarity], result of:
            0.08447786 = score(doc=2090,freq=4.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.5099235 = fieldWeight in 2090, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.078125 = fieldNorm(doc=2090)
          0.06877774 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
            0.06877774 = score(doc=2090,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.38690117 = fieldWeight in 2090, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2090)
      0.33333334 = coord(1/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Theme
    Semantic Web
  10. Sah, M.; Wade, V.: Personalized concept-based search on the Linked Open Data (2015) 0.05
    0.045291096 = product of:
      0.067936644 = sum of:
        0.044042703 = weight(_text_:wide in 2511) [ClassicSimilarity], result of:
          0.044042703 = score(doc=2511,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.1958137 = fieldWeight in 2511, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
        0.023893945 = product of:
          0.04778789 = sum of:
            0.04778789 = weight(_text_:web in 2511) [ClassicSimilarity], result of:
              0.04778789 = score(doc=2511,freq=8.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.2884563 = fieldWeight in 2511, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2511)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this paper, we present a novel personalized concept-based search mechanism for the Web of Data based on results categorization. The innovation of the paper comes from combining novel categorization and personalization techniques, and using categorization to provide personalization. In our approach, search results (Linked Open Data resources) are dynamically categorized into Upper Mapping and Binding Exchange Layer (UMBEL) concepts using a novel fuzzy retrieval model. Then, results with the same concepts are grouped together to form categories, which we call concept lenses. Such categorization enables concept-based browsing of the retrieved results aligned to users' intent or interests. When the user selects a concept lens for exploration, results are immediately personalized. In particular, all concept lenses are personally re-organized according to their similarity to the selected lens. Within the selected concept lens, more relevant results are included using results re-ranking and query expansion, and relevant concept lenses are suggested to support results exploration. This allows dynamic adaptation of results to the user's local choices. We also support interactive personalization; when the user clicks on a result, relevant lenses and results are included within the interacted lens using results re-ranking and query expansion. Extensive evaluations were performed to assess our approach: (i) The performance of our fuzzy-based categorization approach was evaluated on a particular benchmark (~10,000 mappings). The evaluations showed that we can achieve highly acceptable categorization accuracy and perform better than the vector space model. (ii) Personalized search efficacy was assessed in a user study with 32 participants in a tourist domain. The results revealed that our approach performed significantly better than a non-adaptive baseline search. (iii) Dynamic personalization performance was evaluated, which illustrated that our personalization approach is scalable. (iv) Finally, we compared our system with existing LOD search engines, which showed that our approach is unique.
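The grouping of results into concept lenses, and the re-ordering of lenses by similarity to a selected lens, can be caricatured in a few lines (hypothetical results and similarity values; the paper's fuzzy UMBEL categorization is far richer):

```python
from collections import defaultdict

# Hypothetical search results, each already categorized into a concept.
results = [
    ("Louvre", "Museum"), ("Prado", "Museum"),
    ("Eiffel Tower", "Landmark"), ("Ritz", "Hotel"),
]

def build_lenses(results):
    """Group results sharing a concept into a 'concept lens'."""
    lenses = defaultdict(list)
    for title, concept in results:
        lenses[concept].append(title)
    return dict(lenses)

def reorder(lenses, selected, similarity):
    """Personalization step: re-rank lenses by similarity to the selected one."""
    return sorted(lenses, key=lambda c: similarity.get((selected, c), 0.0),
                  reverse=True)

lenses = build_lenses(results)
# Hypothetical concept-to-concept similarities (e.g., from an ontology).
sim = {("Museum", "Museum"): 1.0, ("Museum", "Landmark"): 0.6,
       ("Museum", "Hotel"): 0.2}
order = reorder(lenses, "Museum", sim)
```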
    Source
    Web Semantics: Science, Services and Agents on the World Wide Web. 35(2015) [in press]
    Theme
    Semantic Web
  11. Borst, T.; Neubert, J.; Seiler, A.: Bibliotheken auf dem Weg in das Semantic Web : Bericht von der SWIB2010 in Köln - unterschiedliche Entwicklungsschwerpunkte (2011) 0.04
    0.03994181 = product of:
      0.05991271 = sum of:
        0.033032026 = weight(_text_:wide in 4532) [ClassicSimilarity], result of:
          0.033032026 = score(doc=4532,freq=2.0), product of:
            0.22492146 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.050763648 = queryNorm
            0.14686027 = fieldWeight in 4532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4532)
        0.026880687 = product of:
          0.053761374 = sum of:
            0.053761374 = weight(_text_:web in 4532) [ClassicSimilarity], result of:
              0.053761374 = score(doc=4532,freq=18.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.32451332 = fieldWeight in 4532, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4532)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For the second time after 2009, the conference "Semantic Web in Bibliotheken (SWIB)" took place in Cologne. Over two days, 120 participants from nine countries heard reports on the Semantic Web activities of German and international institutions, as well as of the W3C and the research community. As in the previous year, the conference was organized by the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (hbz) and the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW) - Leibniz-Informationszentrum Wirtschaft, and was already fully booked in advance.
    Content
    "Compared with the previous year's event, clear progress towards a »Web of Linked Open Data (LOD)« was evident. Representatives of institutions such as the German, the French and the Hungarian national libraries, the Prague University of Economics, Mannheim University Library, the hbz and the ZBW reported on their approaches, developments and already available services and infrastructures. State of development: One focus was the transformation and publication of semantically enriched library data from the existing data sources into a machine-readable RDF format, for further processing and for linking with other data sets. In this way the German National Library (DNB), for example, runs the publication and persistent addressability of the Integrated Authority File (GND) as an LOD service currently in its trial phase. Together with other international »memory institutions«, this service is intended in future to offer a reliable infrastructure for identifying persons, organisations or concepts and integrating them into one's own data sets.
    A second development focus was formed by approaches that prepare existing library data on the basis of description languages currently undergoing standardisation, such as »Resource Description and Access« (RDA), and models such as the »Functional Requirements for Bibliographic Records« (FRBR), so that the data become interpretable by external software systems and can, for example, be made navigable in modern catalogue systems. Building on the reflections presented by the US library consultant Karen Coyle at the start of the second day, representatives of the DNB and Stefan Gradmann of Europeana described how fundamental distinctions between a work (for example a novel), its medial form of expression (for example as an audio book) and its manifestation (for example as a CD-ROM) can be expressed with RDA elements. From the perspective of the World Wide Web Consortium (W3C), Antoine Isaac reported on the founding and work of a »Library Linked Data Incubator Group«, which compiles typical use cases and »best practices« for LOD-based applications. Sören Auer of the University of Leipzig gave an overview of innovative research approaches intended to support what he called the »life cycle« of LOD at various points, touching on projects concerned with, among other things, data storage and the automatic interlinking of LOD.
    Legal aspects: That the Semantic Web, especially in its LOD form, concerns not only developers and project managers became clear in two further talks. Stefanie Grunow of the ZBW explained the legal framework for publishing and using LOD, in particular when database content from different sources is involved. Given that reuse of LOD by third parties is entirely desirable but in part cannot be anticipated, it must be examined case by case whether and which licence is suitable for a publisher. From the perspective of a university lecturer, Günther Neher of the FH Potsdam reflected on how the Semantic Web and LOD technologies could in future be incorporated into the information science curriculum at his institution.
    Perspectives: The potential the Semantic Web offers science and research, finally, emerged in the talk by Klaus Tochtermann, director of the ZBW. Starting from classic knowledge management processes such as searching for, providing, organising and indexing specialist information, he pointed out specific openings for semantic technologies. A use case typical for libraries, for instance, is the extension of syntax-based search with concepts from subject vocabularies that supplement the original query. In this way, according to Tochtermann, researchers can retrieve considerably more documents that are at the same time relevant. Anette Seiler (hbz), who had chaired the conference, closed with a positive summary of the very lively exchange, in the sessions and in the breaks alike, within the German and international library community. The feedback spontaneously requested from the participants was also exceptionally positive. By show of hands the audience voted almost unanimously for continuing the SWIB, one more reason for Anette Seiler to announce that SWIB 2011 will take place again, this time in Hamburg and probably again at the end of November."
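The vocabulary-based query expansion Tochtermann describes can be sketched in a few lines of Python. The thesaurus entries, documents and function names below are invented for illustration and are not taken from the ZBW's systems: a syntactic search term is enriched with linked concepts before matching.

```python
# Toy subject thesaurus: preferred term -> related concepts (invented entries).
THESAURUS = {
    "inflation": {"price increase", "monetary devaluation"},
    "labour market": {"employment", "unemployment"},
}

def expand_query(term):
    """Return the original search term plus all thesaurus concepts linked to it."""
    return {term} | THESAURUS.get(term, set())

def search(term, documents):
    """Match any expanded concept against the document texts (case-insensitive)."""
    concepts = {c.lower() for c in expand_query(term)}
    return sorted(doc_id for doc_id, text in documents.items()
                  if any(c in text.lower() for c in concepts))

docs = {
    "d1": "Effects of unemployment on regional growth",
    "d2": "A study of monetary devaluation in the 1920s",
    "d3": "Inflation targeting by central banks",
}

# The plain term "inflation" matches only d3; the expanded query also finds d2.
print(search("inflation", docs))
```

A purely syntactic match would miss d2 entirely; the thesaurus concept "monetary devaluation" recovers it, which is exactly the effect described in the talk.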
    Theme
    Semantic Web
  12. Brunetti, J.M.; Roberto García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    0.031697795 = product of:
      0.095093384 = sum of:
        0.095093384 = sum of:
          0.06758229 = weight(_text_:web in 1626) [ClassicSimilarity], result of:
            0.06758229 = score(doc=1626,freq=16.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.4079388 = fieldWeight in 1626, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
          0.027511096 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
            0.027511096 = score(doc=1626,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.15476047 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs. Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesaurus and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
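As a hedged illustration of how an overview component can be derived automatically from semantic data, the sketch below counts instances per class in a toy triple set; such counts could drive the ordering of a navigation bar's entries or the area of Treemap tiles. All class and instance names are invented, and this is not the authors' implementation.

```python
from collections import Counter

RDF_TYPE = "rdf:type"

# A toy data set as (subject, predicate, object) triples (invented example).
triples = [
    ("ex:item1", RDF_TYPE, "ex:Painting"),
    ("ex:item2", RDF_TYPE, "ex:Painting"),
    ("ex:item3", RDF_TYPE, "ex:Sculpture"),
    ("ex:item4", RDF_TYPE, "ex:Painting"),
]

def class_overview(triples):
    """Count instances per class; the largest classes come first, which is the
    ordering an overview component such as a navigation bar would use."""
    counts = Counter(o for s, p, o in triples if p == RDF_TYPE)
    return counts.most_common()

print(class_overview(triples))  # largest classes first
```

In a real system the class list would come from the ontology and the counts from the full data set, but the principle, structure first, details on demand, is the same.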
    Date
    20. 1.2015 18:30:22
    Theme
    Semantic Web
  13. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.03
    0.028706929 = product of:
      0.086120784 = sum of:
        0.086120784 = sum of:
          0.05173191 = weight(_text_:web in 4553) [ClassicSimilarity], result of:
            0.05173191 = score(doc=4553,freq=6.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.3122631 = fieldWeight in 4553, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
          0.03438887 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
            0.03438887 = score(doc=4553,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.19345059 = fieldWeight in 4553, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4553)
      0.33333334 = coord(1/3)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
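The "deductive gold standard" the authors compare against can be illustrated with a minimal forward-chaining materialiser for two RDFS entailment rules (subclass transitivity and type propagation). This is a generic sketch, not the authors' system; the class and instance names are invented.

```python
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def materialise(triples):
    """Apply the two RDFS rules to a fixpoint and return the closed triple set."""
    closed = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in closed:
            if p == SUBCLASS:
                for s2, p2, o2 in closed:
                    # rdfs11: subClassOf is transitive
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
                    # rdfs9: instances of a subclass are instances of the superclass
                    if p2 == TYPE and o2 == s:
                        new.add((s2, TYPE, o))
        if not new <= closed:
            closed |= new
            changed = True
    return closed

kb = {
    ("ex:Dog", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
    ("ex:rex", TYPE, "ex:Dog"),
}

inferred = materialise(kb)
print(("ex:rex", TYPE, "ex:Animal") in inferred)  # entailed by the two rules
```

A deductive closure like this is sound and complete for the rules it implements; the paper's point is that a trained deep learning model is then measured by how much of this closure it reproduces (recall) without adding unentailed triples (precision).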
    Date
    16.11.2018 14:22:01
    Theme
    Semantic Web
  14. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.03
    0.025702521 = product of:
      0.07710756 = sum of:
        0.07710756 = sum of:
          0.035840917 = weight(_text_:web in 662) [ClassicSimilarity], result of:
            0.035840917 = score(doc=662,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.21634221 = fieldWeight in 662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=662)
          0.041266643 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
            0.041266643 = score(doc=662,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.23214069 = fieldWeight in 662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=662)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:29:20
    Theme
    Semantic Web
  15. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.03
    0.025702521 = product of:
      0.07710756 = sum of:
        0.07710756 = sum of:
          0.035840917 = weight(_text_:web in 2024) [ClassicSimilarity], result of:
            0.035840917 = score(doc=2024,freq=2.0), product of:
              0.1656677 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.050763648 = queryNorm
              0.21634221 = fieldWeight in 2024, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=2024)
          0.041266643 = weight(_text_:22 in 2024) [ClassicSimilarity], result of:
            0.041266643 = score(doc=2024,freq=2.0), product of:
              0.17776565 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050763648 = queryNorm
              0.23214069 = fieldWeight in 2024, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2024)
      0.33333334 = coord(1/3)
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
    Theme
    Semantic Web
  16. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.02
    0.02252743 = product of:
      0.06758229 = sum of:
        0.06758229 = product of:
          0.13516457 = sum of:
            0.13516457 = weight(_text_:web in 54) [ClassicSimilarity], result of:
              0.13516457 = score(doc=54,freq=16.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.8158776 = fieldWeight in 54, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=54)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents the Semantic Web, web content authoring, web technology, the goals of semantics and the requirements for the expansion of Web 3.0. It also describes the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomy and OWL. How the Semantic Web can support valuable library information services and make the best use of library collections is also discussed.
    Theme
    Semantic Web
  17. Weller, K.: Anforderungen an die Wissensrepräsentation im Social Semantic Web (2010) 0.02
    0.0209072 = product of:
      0.0627216 = sum of:
        0.0627216 = product of:
          0.1254432 = sum of:
            0.1254432 = weight(_text_:web in 4061) [ClassicSimilarity], result of:
              0.1254432 = score(doc=4061,freq=18.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.75719774 = fieldWeight in 4061, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4061)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article gives an insight into the current merging of Web 2.0 and Semantic Web approaches, which can be described as the Social Semantic Web. The basic idea of the Social Semantic Web is described and some first application examples are presented. An essential focus of this development lies in the implementation of new methods and approaches in the field of knowledge representation. This article presents four areas in which knowledge representation methods must evolve further in the Social Semantic Web, and discusses the current state of each.
    Object
    Web 2.0
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Hrsg.: M. Ockenfeld
    Theme
    Semantic Web
  18. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.02
    0.0209072 = product of:
      0.0627216 = sum of:
        0.0627216 = product of:
          0.1254432 = sum of:
            0.1254432 = weight(_text_:web in 3939) [ClassicSimilarity], result of:
              0.1254432 = score(doc=3939,freq=18.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.75719774 = fieldWeight in 3939, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3939)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Semantic Web has attracted much attention, both from academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some own formalisms for handling uncertainty and/or vagueness in the Semantic Web.
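A minimal illustration of the kind of vagueness handling the paper surveys: triples annotated with degrees of truth in [0, 1], combined with the Gödel (minimum) t-norm. The facts, property names and degrees below are invented for the example and do not reproduce any formalism from the paper.

```python
# Vague facts annotated with degrees of truth in [0, 1] (invented example).
facts = {
    ("ex:hotel1", "ex:isCloseTo", "ex:beach"): 0.8,    # vague: "close to"
    ("ex:hotel1", "ex:isAffordable", "ex:true"): 0.6,  # vague: "affordable"
}

def degree(triple):
    """Degree of truth of a fact; 0.0 if the fact is unknown."""
    return facts.get(triple, 0.0)

def conj(*degrees):
    """Gödel t-norm: a conjunction is as true as its weakest conjunct."""
    return min(degrees)

# "hotel1 is close to the beach AND affordable" holds to degree 0.6.
d = conj(degree(("ex:hotel1", "ex:isCloseTo", "ex:beach")),
         degree(("ex:hotel1", "ex:isAffordable", "ex:true")))
print(d)
```

Other t-norms (product, Lukasiewicz) give different conjunction semantics; which one is appropriate is exactly the kind of modelling choice the surveyed formalisms make explicit.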
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  19. Sequeda, J.F.: Integrating relational databases with the Semantic Web : a reflection (2017) 0.02
    0.019811815 = product of:
      0.059435442 = sum of:
        0.059435442 = product of:
          0.118870884 = sum of:
            0.118870884 = weight(_text_:web in 3935) [ClassicSimilarity], result of:
              0.118870884 = score(doc=3935,freq=22.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.717526 = fieldWeight in 3935, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3935)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    From the beginning it was understood that the success of the Semantic Web hinges on integrating the vast amount of data stored in Relational Databases. This manuscript reflects on the last 10 years of our research results to integrate Relational Databases with the Semantic Web. Since 2007, our research has led us to answer the following question: How and to what extent can Relational Databases be Integrated with the Semantic Web? The answer comes in two parts. We start by presenting how to get from Relational Databases to the Semantic Web via mappings, such as the W3C Direct Mapping and R2RML standards. Subsequently, we present how the Semantic Web can access Relational Databases. We conclude with how Relational Databases and Semantic Web technologies are being used in practice for data integration, and discuss open challenges.
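The core idea behind the W3C Direct Mapping can be sketched with the standard library's sqlite3 module: each row of a relational table becomes a subject IRI and each column value a triple. The table, base IRI and data below are invented, and the real Direct Mapping has many more rules (foreign keys, NULLs, datatypes) than this toy shows.

```python
import sqlite3

BASE = "http://example.org/"  # invented base IRI for the mapping

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO book VALUES (1, 'Weaving the Web')")

def direct_map(conn, table, pk):
    """Yield one (subject, predicate, object) triple per row/column pair:
    the subject IRI encodes the primary-key value, the predicate the column."""
    cur = conn.execute(f"SELECT * FROM {table}")  # toy code: table name is trusted
    cols = [d[0] for d in cur.description]
    for row in cur:
        subj = f"<{BASE}{table}/{pk}={row[cols.index(pk)]}>"
        for col, val in zip(cols, row):
            yield (subj, f"<{BASE}{table}#{col}>", repr(val))

triples = list(direct_map(conn, "book", "id"))
for t in triples:
    print(*t)
```

The reverse direction the paper discusses, letting SPARQL queries access the relational store, amounts to rewriting triple patterns over these generated IRIs back into SQL.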
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  20. Padmavathi, T.; Krishnamurthy, M.: Semantic Web tools and techniques for knowledge organization : an overview (2017) 0.02
    0.018438421 = product of:
      0.05531526 = sum of:
        0.05531526 = product of:
          0.11063052 = sum of:
            0.11063052 = weight(_text_:web in 3618) [ClassicSimilarity], result of:
              0.11063052 = score(doc=3618,freq=14.0), product of:
                0.1656677 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.050763648 = queryNorm
                0.6677857 = fieldWeight in 3618, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3618)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The enormous amount of information generated every day and spread across the web is too diverse in nature for human consumption alone. To overcome this difficulty, the transformation of current unstructured information into a structured form called a "Semantic Web" was proposed by Tim Berners-Lee in 1989, to enable computers to understand and interpret the information they store. The aim of the Semantic Web is the integration of heterogeneous and distributed data spread across the web for knowledge discovery. The core Semantic Web technologies discussed include the knowledge representation languages RDF and OWL, ontology editors and reasoning tools, and ontology query languages such as SPARQL.
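To make the query-language side of such an overview concrete, here is a toy matcher for a single SPARQL-style triple pattern over an in-memory triple set. Names prefixed with "?" are variables; all data below is an invented example, not drawn from the paper, and a real SPARQL engine joins many such patterns.

```python
# An in-memory triple set (invented example data).
triples = {
    ("ex:rdf", "ex:partOf", "ex:SemanticWeb"),
    ("ex:owl", "ex:partOf", "ex:SemanticWeb"),
    ("ex:sparql", "ex:queries", "ex:rdf"),
}

def match(pattern, triples):
    """Return one variable-binding dict per triple matching the pattern,
    e.g. pattern ("?x", "ex:partOf", "ex:SemanticWeb")."""
    results = []
    for triple in triples:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t        # variable: bind it to the triple term
            elif p != t:
                binding = None        # constant mismatch: reject this triple
                break
        if binding is not None:
            results.append(binding)
    return results

# Analogous to: SELECT ?x WHERE { ?x ex:partOf ex:SemanticWeb }
rows = match(("?x", "ex:partOf", "ex:SemanticWeb"), triples)
print(sorted(b["?x"] for b in rows))
```

Extending this from one pattern to a basic graph pattern means joining the binding dicts of several `match` calls on their shared variables.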
    Theme
    Semantic Web

Languages

  • e 71
  • d 21
  • f 1

Types