Search (93 results, page 1 of 5)

  • × theme_ss:"Semantic Web"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
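
  The three chips above are the active filters in Solr filter-query syntax; note that year_i:[2010 TO 2020} is a half-open range, including 2010 but excluding 2020. Below is a minimal sketch of how such a faceted request could be issued programmatically; the endpoint, core name, and free-text query string are assumptions, since the page reveals its query terms only indirectly through the score explanations further down:

    import requests

    # Hypothetical Solr endpoint and core; the real backend is not shown on this page.
    SOLR_SELECT = "http://localhost:8983/solr/documents/select"

    params = {
        "q": "web search engine",       # placeholder; the actual query string is not shown
        "fq": [                         # the three facet chips, passed as filter queries
            'theme_ss:"Semantic Web"',
            'type_ss:"a"',
            "year_i:[2010 TO 2020}",    # half-open range: 2010 <= year < 2020
        ],
        "rows": 20,                     # 20 results per page, as listed below
        "debugQuery": "true",           # asks Solr to return the score explanations
    }
    results = requests.get(SOLR_SELECT, params=params).json()
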
  1. Allocca, C.; d'Aquin, M.; Motta, E.: Impact of using relationships between ontologies to enhance the ontology search results (2012) 0.12
    0.12242785 = product of:
      0.16323714 = sum of:
        0.050382458 = weight(_text_:web in 264) [ClassicSimilarity], result of:
          0.050382458 = score(doc=264,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 264, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
        0.07377557 = weight(_text_:search in 264) [ClassicSimilarity], result of:
          0.07377557 = score(doc=264,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 264, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=264)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 264) [ClassicSimilarity], result of:
              0.07815824 = score(doc=264,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 264, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=264)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
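
    These indented trees are Lucene ClassicSimilarity (TF-IDF) explanations, and every number in them can be re-derived from the few inputs they print. As a worked check, the sketch below recomputes the score of result 1 in Python; the queryNorm constant is copied from the tree as a given, since it depends on query terms that are not shown on this page:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm, query_norm, max_docs=44218):
          # weight = queryWeight * fieldWeight
          #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          i = idf(doc_freq, max_docs)
          return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

      QUERY_NORM = 0.049439456   # given above; derived from the full query, not recoverable here
      FIELD_NORM = 0.0390625     # encoded length norm of doc 264

      web    = term_score(6.0,  4597, FIELD_NORM, QUERY_NORM)        # ~0.050382458
      search = term_score(10.0, 3718, FIELD_NORM, QUERY_NORM)        # ~0.07377557
      engine = term_score(2.0,  570,  FIELD_NORM, QUERY_NORM) * 0.5  # inner coord(1/2)

      print((web + search + engine) * 0.75)  # outer coord(3/4): ~0.12242785

    The same function, fed the coord factors and fieldNorm values printed in each tree, reproduces every other score on this page.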
    
    Abstract
    Using semantic web search engines, such as Watson, Swoogle or Sindice, to find ontologies is a complex exploratory activity. It generally requires formulating multiple queries, browsing pages of results, and assessing the returned ontologies against each other to obtain a relevant and adequate subset of ontologies for the intended use. Our hypothesis is that at least some of the difficulties related to searching ontologies stem from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected and shown on different result pages. In earlier publications we presented a software framework, Kannel, which is able to automatically detect and make explicit relationships between ontologies in large ontology repositories. In this paper, we present a study that compares the use of the Watson ontology search engine with an extension, Watson+Kannel, which provides information regarding the various relationships occurring between the result ontologies. We evaluate Watson+Kannel by demonstrating through various indicators that explicit relationships between ontologies improve users' efficiency in ontology search, thus validating our hypothesis.
    Source
    9th Extended Semantic Web Conference (ESWC), 2012-05-27/2012-05-31 in Hersonissos, Crete, Greece. Eds.: Elena Simperl et al
    Theme
    Semantic Web
  2. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.09
    0.08740793 = product of:
      0.116543904 = sum of:
        0.06581937 = weight(_text_:web in 1626) [ClassicSimilarity], result of:
          0.06581937 = score(doc=1626,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 1626, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.03732781 = weight(_text_:search in 1626) [ClassicSimilarity], result of:
          0.03732781 = score(doc=1626,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.21722981 = fieldWeight in 1626, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
              0.026793454 = score(doc=1626,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 1626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1626)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Purpose - The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues. Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview tasks have been developed, so they are automatically generated from semantic data, and evaluated with end-users. Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs. Originality/value - Obtaining semantic data sets overviews cannot be easily done with the current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which is typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support to obtain overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and show that they can be generated automatically from the thesaurus and ontologies that structure semantic data while providing a comparable user experience to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Series
    Special issue: Semantic search
    Theme
    Semantic Web
  3. Djioua, B.; Desclés, J.-P.; Alrahabi, M.: Searching and mining with semantic categories (2012) 0.09
    0.08612041 = product of:
      0.114827216 = sum of:
        0.029088326 = weight(_text_:web in 99) [ClassicSimilarity], result of:
          0.029088326 = score(doc=99,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 99, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=99)
        0.046659768 = weight(_text_:search in 99) [ClassicSimilarity], result of:
          0.046659768 = score(doc=99,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 99, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=99)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 99) [ClassicSimilarity], result of:
              0.07815824 = score(doc=99,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 99, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=99)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    A new model is proposed to retrieve information by automatically building a semantic metatext structure for texts, which allows searching and extracting discourse and semantic information according to certain linguistic categorizations. This paper presents approaches for searching and mining full text with semantic categories. The model is built up from two engines: the first one, called EXCOM (Djioua et al., 2006; Alrahabi, 2010), is an automatic system for text annotation, related to discourse and semantic maps, which are specifications of general linguistic ontologies founded on the Applicative and Cognitive Grammar. The annotation layer uses a linguistic method called Contextual Exploration, which handles the polysemic values of a term in texts. Several 'semantic maps' underlying 'points of view' for text mining guide this automatic annotation process. The second engine uses the previously produced semantically annotated texts to create a semantic inverted index, which is able to retrieve relevant documents for queries associated with discourse and semantic categories such as definition, quotation, causality, relations between concepts, etc. (Djioua & Desclés, 2007). This semantic indexation process builds a metatext layer for textual contents. Some data and linguistic rule sets, as well as the general architecture that extends third-party software, are provided as supplementary information.
    Footnote
    Cf.: http://www.igi-global.com/book/next-generation-search-engines/64423.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
    Theme
    Semantic Web
  4. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.07
    0.07432193 = product of:
      0.09909591 = sum of:
        0.032909684 = weight(_text_:web in 1634) [ClassicSimilarity], result of:
          0.032909684 = score(doc=1634,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 1634, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.052789498 = weight(_text_:search in 1634) [ClassicSimilarity], result of:
          0.052789498 = score(doc=1634,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 1634, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
              0.026793454 = score(doc=1634,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 1634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1634)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be applied to reveal the contradictory relations in different ontologies. Findings - To assess the feasibility of the approach two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprised of multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Series
    Special issue: Semantic search
    Theme
    Semantic Web
  5. Fernández, M.; Cantador, I.; López, V.; Vallet, D.; Castells, P.; Motta, E.: Semantically enhanced Information Retrieval : an ontology-based approach (2011) 0.06
    0.0633452 = product of:
      0.1266904 = sum of:
        0.05203478 = weight(_text_:web in 230) [ClassicSimilarity], result of:
          0.05203478 = score(doc=230,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.32250395 = fieldWeight in 230, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=230)
        0.07465562 = weight(_text_:search in 230) [ClassicSimilarity], result of:
          0.07465562 = score(doc=230,freq=16.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.43445963 = fieldWeight in 230, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=230)
      0.5 = coord(2/4)
    
    Abstract
    Currently, techniques for content description and query processing in Information Retrieval (IR) are based on keywords, and therefore provide limited capabilities to capture the conceptualizations associated with user needs and contents. Aiming to solve the limitations of keyword-based models, the idea of conceptual search, understood as searching by meanings rather than literal strings, has been the focus of a wide body of research in the IR field. More recently, it has been used as a prototypical scenario (or even envisioned as a potential "killer app") in the Semantic Web (SW) vision, since its emergence in the late nineties. However, current approaches to semantic search developed in the SW area have not yet taken full advantage of the acquired knowledge, accumulated experience, and technological sophistication achieved through several decades of work in the IR field. Starting from this position, this work investigates the definition of an ontology-based IR model, oriented to the exploitation of domain Knowledge Bases to support semantic search capabilities in large document repositories, stressing on the one hand the use of fully fledged ontologies in the semantic-based perspective, and on the other hand the consideration of unstructured content as the target search space. The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search. Additional contributions include: an innovative rank fusion technique that minimizes the undesired effects of knowledge sparseness on the yet juvenile SW, and the creation of a large-scale evaluation benchmark, based on TREC IR evaluation standards, which allows a rigorous comparison between IR and SW approaches. Conducted experiments show that our semantic search model obtained comparable and better performance results (in terms of MAP and P@10 values) than the best TREC automatic system.
    Series
    JWS special issue on Semantic Search
    Source
    Web semantics: science, services and agents on the World Wide Web. 9(2011) no.4, S.434-452
    Theme
    Semantic Web
  6. Neumaier, S.: Data integration for open data on the Web (2017) 0.06
    0.0624894 = product of:
      0.1249788 = sum of:
        0.09198537 = weight(_text_:web in 3923) [ClassicSimilarity], result of:
          0.09198537 = score(doc=3923,freq=20.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5701118 = fieldWeight in 3923, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
        0.032993436 = weight(_text_:search in 3923) [ClassicSimilarity], result of:
          0.032993436 = score(doc=3923,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 3923, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3923)
      0.5 = coord(2/4)
    
    Abstract
    In this lecture we will introduce and discuss the challenges of integrating openly available Web data and how to solve them. Firstly, while we will address this topic from the viewpoint of Semantic Web research, not all data is readily available as RDF or Linked Data, so we will give an introduction to different data formats prevalent on the Web, namely, standard formats for publishing and exchanging tabular, tree-shaped, and graph data. Secondly, not all Open Data is really completely open, so we will discuss and address issues around licences and terms of usage associated with Open Data, as well as documentation of data provenance. Thirdly, we will discuss (meta-)data quality issues associated with Open Data on the Web and how Semantic Web techniques and vocabularies can be used to describe and remedy them. Fourthly, we will address issues of searchability and integration of Open Data and discuss to what extent semantic search can help to overcome these. We close by briefly summarizing further issues not covered explicitly herein, such as multi-linguality, temporal aspects (archiving, evolution, temporal querying), as well as how and whether OWL and RDFS reasoning on top of integrated open data could help.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  7. Ghorbel, H.; Bahri, A.; Bouaziz, R.: Fuzzy ontologies building platform for Semantic Web : FOB platform (2012) 0.06
    0.061810125 = product of:
      0.12362025 = sum of:
        0.07696048 = weight(_text_:web in 98) [ClassicSimilarity], result of:
          0.07696048 = score(doc=98,freq=14.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.47698978 = fieldWeight in 98, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
        0.046659768 = weight(_text_:search in 98) [ClassicSimilarity], result of:
          0.046659768 = score(doc=98,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 98, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=98)
      0.5 = coord(2/4)
    
    Abstract
    The unstructured design of Web resources favors human comprehension, but makes the automatic exploitation of their contents by machines difficult. The Semantic Web therefore aims at making cooperation between humans and machines possible by giving all information a well-defined meaning. The first weavings of the Semantic Web are already prepared: machines become able to process and understand data that they previously could only display, by means of ontologies, which constitute an essential element of the Semantic Web as they serve as a form of knowledge representation, sharing, and reuse. However, Web content is subject to imperfection, and crisp ontologies are less suitable for representing concepts with imprecise definitions. To overcome this problem, fuzzy ontologies constitute a promising research direction. The definition of the components of fuzzy ontologies, however, is an issue that needs to be treated carefully: an appropriate methodology for building and operationalizing fuzzy ontological models is required. This chapter defines a fuzzy ontological model based on fuzzy description logic. The model uses a new approach for the formal description of fuzzy ontologies and shows how all the basic components defined for fuzzy ontologies can be constructed.
    Footnote
    Cf.: http://www.igi-global.com/book/next-generation-search-engines/64422.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
    Theme
    Semantic Web
  8. Spree, U.; Feißt, N.; Lühr, A.; Piesztal, B.; Schroeder, N.; Wollschläger, P.: Semantic search : State-of-the-Art-Überblick zu semantischen Suchlösungen im WWW (2011) 0.06
    0.05836313 = product of:
      0.11672626 = sum of:
        0.07053544 = weight(_text_:web in 345) [ClassicSimilarity], result of:
          0.07053544 = score(doc=345,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 345, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=345)
        0.046190813 = weight(_text_:search in 345) [ClassicSimilarity], result of:
          0.046190813 = score(doc=345,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=345)
      0.5 = coord(2/4)
    
    Abstract
    This chapter gives an overview of existing semantic search engines. A total of 95 such search services are identified and compared in a content analysis. It can be observed that semantic search differs substantially from the technologies promoted in the context of the Semantic Web, and that semantics must be understood more broadly in the search engines examined. The search engines under consideration are placed in a tiered model that distinguishes them by their degree of semantics. The chapter closes with 8 theses on the current state of semantic search.
    Source
    Handbuch Internet-Suchmaschinen, 2: Neue Entwicklungen in der Web-Suche. Ed.: D. Lewandowski
    Theme
    Semantic Web
  9. Rajabi, E.; Sanchez-Alonso, S.; Sicilia, M.-A.: Analyzing broken links on the web of data : An experiment with DBpedia (2014) 0.06
    0.05836313 = product of:
      0.11672626 = sum of:
        0.07053544 = weight(_text_:web in 1330) [ClassicSimilarity], result of:
          0.07053544 = score(doc=1330,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 1330, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1330)
        0.046190813 = weight(_text_:search in 1330) [ClassicSimilarity], result of:
          0.046190813 = score(doc=1330,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 1330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1330)
      0.5 = coord(2/4)
    
    Abstract
    Linked open data allow interlinking and integrating any kind of data on the web. Links between various data sources play a key role insofar as they allow software applications (e.g., browsers, search engines) to operate over the aggregated data space as if it were a single local database. In this new data space, where DBpedia, a data set including structured information from Wikipedia, seems to be the central hub, we analyzed and highlighted outgoing links from this hub in an effort to discover broken links. The paper reports on an experiment to examine the causes of broken links and proposes some treatments for solving this problem.
    Theme
    Semantic Web
  10. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.06
    0.057883017 = product of:
      0.11576603 = sum of:
        0.08227421 = weight(_text_:web in 2090) [ClassicSimilarity], result of:
          0.08227421 = score(doc=2090,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5099235 = fieldWeight in 2090, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.03349182 = product of:
          0.06698364 = sum of:
            0.06698364 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.06698364 = score(doc=2090,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Theme
    Semantic Web
  11. Sah, M.; Wade, V.: Personalized concept-based search on the Linked Open Data (2015) 0.06
    0.05559749 = product of:
      0.11119498 = sum of:
        0.046541322 = weight(_text_:web in 2511) [ClassicSimilarity], result of:
          0.046541322 = score(doc=2511,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 2511, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
        0.064653665 = weight(_text_:search in 2511) [ClassicSimilarity], result of:
          0.064653665 = score(doc=2511,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.37625307 = fieldWeight in 2511, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2511)
      0.5 = coord(2/4)
    
    Abstract
    In this paper, we present a novel personalized concept-based search mechanism for the Web of Data based on results categorization. The innovation of the paper comes from combining novel categorization and personalization techniques, and using categorization for providing personalization. In our approach, search results (Linked Open Data resources) are dynamically categorized into Upper Mapping and Binding Exchange Layer (UMBEL) concepts using a novel fuzzy retrieval model. Then, results with the same concepts are grouped together to form categories, which we call concept lenses. Such categorization enables concept-based browsing of the retrieved results aligned to users' intent or interests. When the user selects a concept lens for exploration, results are immediately personalized. In particular, all concept lenses are personally re-organized according to their similarity to the selected lens. Within the selected concept lens, more relevant results are included using results re-ranking and query expansion, and relevant concept lenses are suggested to support results exploration. This allows dynamic adaptation of results to the user's local choices. We also support interactive personalization; when the user clicks on a result, within the interacted lens, relevant lenses and results are included using results re-ranking and query expansion. Extensive evaluations were performed to assess our approach: (i) Performance of our fuzzy-based categorization approach was evaluated on a particular benchmark (~10,000 mappings). The evaluations showed that we can achieve highly acceptable categorization accuracy and perform better than the vector space model. (ii) Personalized search efficacy was assessed using a user study with 32 participants in a tourist domain. The results revealed that our approach performed significantly better than a non-adaptive baseline search. (iii) Dynamic personalization performance was evaluated, which illustrated that our personalization approach is scalable. (iv) Finally, we compared our system with the existing LOD search engines, which showed that our approach is unique.
    Source
    Web Semantics: Science, Services and Agents on the World Wide Web. 35(2015) [in press]
    Theme
    Semantic Web
  12. Rüther, M.; Fock, J.; Schultz-Krutisch, T.; Bandholtz, T.: Classification and reference vocabulary in linked environment data (2011) 0.04
    0.044478323 = product of:
      0.08895665 = sum of:
        0.049364526 = weight(_text_:web in 4816) [ClassicSimilarity], result of:
          0.049364526 = score(doc=4816,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 4816, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4816)
        0.03959212 = weight(_text_:search in 4816) [ClassicSimilarity], result of:
          0.03959212 = score(doc=4816,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 4816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4816)
      0.5 = coord(2/4)
    
    Abstract
    The Federal Environment Agency (UBA), Germany, has a long tradition in knowledge organization, using a library along with many Web-based information systems. The backbone of this information space is a classification system enhanced by a reference vocabulary which consists of a thesaurus, a gazetteer and a chronicle. Over the years, classification has increasingly been relegated to the background compared with the reference vocabulary indexing and full text search. Bibliographic items are no longer classified directly but tagged with thesaurus terms, with those terms being classified. Since 2010 we have been developing a linked data representation of this knowledge base. While we are linking bibliographic and observation data with the controlled vocabulary in a Resource Description Framework (RDF) representation, the classification may be revisited as a powerful organization system by inference. This also raises questions about the quality and feasibility of an unambiguous classification of thesaurus terms.
    Theme
    Semantic Web
  13. Lassalle, E.; Lassalle, E.: Semantic models in information retrieval (2012) 0.04
    0.037874047 = product of:
      0.07574809 = sum of:
        0.029088326 = weight(_text_:web in 97) [ClassicSimilarity], result of:
          0.029088326 = score(doc=97,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 97, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=97)
        0.046659768 = weight(_text_:search in 97) [ClassicSimilarity], result of:
          0.046659768 = score(doc=97,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 97, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=97)
      0.5 = coord(2/4)
    
    Footnote
    Vgl.: http://www.igi-global.com/book/next-generation-search-engines/64424.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, u.a
    Theme
    Semantic Web
  14. Kara, S.: ¬An ontology-based retrieval system using semantic indexing (2012) 0.04
    0.037249055 = product of:
      0.07449811 = sum of:
        0.03490599 = weight(_text_:web in 3829) [ClassicSimilarity], result of:
          0.03490599 = score(doc=3829,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
        0.03959212 = weight(_text_:search in 3829) [ClassicSimilarity], result of:
          0.03959212 = score(doc=3829,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 3829, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3829)
      0.5 = coord(2/4)
    
    Abstract
    In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely, usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
    Theme
    Semantic Web
  15. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.03
    0.033564184 = product of:
      0.06712837 = sum of:
        0.050382458 = weight(_text_:web in 4553) [ClassicSimilarity], result of:
          0.050382458 = score(doc=4553,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 4553, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4553)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.03349182 = score(doc=4553,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
    Theme
    Semantic Web
  16. Aslam, S.; Sonkar, S.K.: Semantic Web : an overview (2019) 0.03
    0.032909684 = product of:
      0.13163874 = sum of:
        0.13163874 = weight(_text_:web in 54) [ClassicSimilarity], result of:
          0.13163874 = score(doc=54,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.8158776 = fieldWeight in 54, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=54)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents the Semantic Web: web content authoring, web technology, the goals of semantics, and the requirements for the expansion of Web 3.0. It also describes the different components of the Semantic Web, such as HTTP, HTML, XML, XML Schema, URI, RDF, taxonomies and OWL, and discusses how the Semantic Web can draw on the benefits of library functions and make the best use of library collections to provide valuable information services.
    Theme
    Semantic Web
  17. Weller, K.: Anforderungen an die Wissensrepräsentation im Social Semantic Web (2010) 0.03
    0.03054274 = product of:
      0.12217096 = sum of:
        0.12217096 = weight(_text_:web in 4061) [ClassicSimilarity], result of:
          0.12217096 = score(doc=4061,freq=18.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.75719774 = fieldWeight in 4061, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4061)
      0.25 = coord(1/4)
    
    Abstract
    This article gives an insight into the current merging of Web 2.0 and Semantic Web approaches, which can be described as the Social Semantic Web. The basic idea of the Social Semantic Web is described and some first application examples are presented. A major focus of this development lies in the implementation of new methods and approaches in the field of knowledge representation. The article presents four focus areas in which knowledge representation methods must evolve in the Social Semantic Web, discussing the current state of each.
    Object
    Web 2.0
    Source
    Semantic web & linked data: Elemente zukünftiger Informationsinfrastrukturen ; 1. DGI-Konferenz ; 62. Jahrestagung der DGI ; Frankfurt am Main, 7. - 9. Oktober 2010 ; Proceedings / Deutsche Gesellschaft für Informationswissenschaft und Informationspraxis. Ed.: M. Ockenfeld
    Theme
    Semantic Web
  18. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.03
    0.03054274 = product of:
      0.12217096 = sum of:
        0.12217096 = weight(_text_:web in 3939) [ClassicSimilarity], result of:
          0.12217096 = score(doc=3939,freq=18.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.75719774 = fieldWeight in 3939, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3939)
      0.25 = coord(1/4)
    
    Abstract
    The Semantic Web has attracted much attention, both from academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some own formalisms for handling uncertainty and/or vagueness in the Semantic Web.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  19. Sequeda, J.F.: Integrating relational databases with the Semantic Web : a reflection (2017) 0.03
    0.028942522 = product of:
      0.11577009 = sum of:
        0.11577009 = weight(_text_:web in 3935) [ClassicSimilarity], result of:
          0.11577009 = score(doc=3935,freq=22.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.717526 = fieldWeight in 3935, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3935)
      0.25 = coord(1/4)
    
    Abstract
    From the beginning it was understood that the success of the Semantic Web hinges on integrating the vast amount of data stored in Relational Databases. This manuscript reflects on the last 10 years of our research on integrating Relational Databases with the Semantic Web. Since 2007, our research has led us to answer the following question: How and to what extent can Relational Databases be integrated with the Semantic Web? The answer comes in two parts. We start by presenting how to get from Relational Databases to the Semantic Web via mappings, such as the W3C Direct Mapping and R2RML standards. Subsequently, we present how the Semantic Web can access Relational Databases. We conclude with how Relational Databases and Semantic Web technologies are being used in practice for data integration, and discuss open challenges.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  20. Hooland, S. van; Verborgh, R.; De Wilde, M.; Hercher, J.; Mannens, E.; Van de Walle, R.: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.03
    0.02750054 = product of:
      0.05500108 = sum of:
        0.03490599 = weight(_text_:web in 662) [ClassicSimilarity], result of:
          0.03490599 = score(doc=662,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 662, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=662)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
              0.04019018 = score(doc=662,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 662, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=662)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 3.2013 19:29:20
    Theme
    Semantic Web

Languages

  • e 71
  • d 21
  • f 1

Types