Search (314 results, page 1 of 16)

  • theme_ss:"Semantic Web"
  1. Hooland, S. van; Verborgh, R.; Wilde, M. De; Hercher, J.; Mannens, E.; Walle, R. Van de: Evaluating the success of vocabulary reconciliation for cultural heritage collections (2013) 0.09
    0.09452079 = product of:
      0.14178118 = sum of:
        0.010192491 = weight(_text_:a in 662) [ClassicSimilarity], result of:
          0.010192491 = score(doc=662,freq=10.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.1709182 = fieldWeight in 662, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=662)
        0.1315887 = sum of:
          0.089545935 = weight(_text_:de in 662) [ClassicSimilarity], result of:
            0.089545935 = score(doc=662,freq=4.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.4028896 = fieldWeight in 662, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.046875 = fieldNorm(doc=662)
          0.04204277 = weight(_text_:22 in 662) [ClassicSimilarity], result of:
            0.04204277 = score(doc=662,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.23214069 = fieldWeight in 662, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=662)
      0.6666667 = coord(2/3)
    
    Abstract
    The concept of Linked Data has made its entrance into the cultural heritage sector due to its potential use for the integration of heterogeneous collections and for deriving additional value out of existing metadata. However, practitioners and researchers alike need a better understanding of what outcome they can reasonably expect of the reconciliation process between their local metadata and established controlled vocabularies which are already a part of the Linked Data cloud. This paper offers an in-depth analysis of how a locally developed vocabulary can be successfully reconciled with the Library of Congress Subject Headings (LCSH) and the Art and Architecture Thesaurus (AAT) with the help of a general-purpose tool for interactive data transformation (OpenRefine). Issues negatively affecting the reconciliation process are identified and solutions are proposed in order to derive maximum value from existing metadata and controlled vocabularies in an automated manner.
    Date
    22. 3.2013 19:29:20
    Type
    a
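  A reading aid for the score trees: each result is followed by the raw relevance explanation of the underlying Lucene engine (ClassicSimilarity, a TF-IDF variant). The following minimal Python sketch reproduces the arithmetic of the first tree above; the queryNorm and fieldNorm constants are copied from that tree rather than derived from index statistics.

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def tf(freq: float) -> float:
        """Lucene ClassicSimilarity tf: sqrt(term frequency)."""
        return math.sqrt(freq)

    QUERY_NORM = 0.051718395  # copied from the explain tree
    FIELD_NORM = 0.046875     # length norm of doc 662, copied from the tree

    # Term '_text_:a' in doc 662: freq=10, docFreq=37942, maxDocs=44218.
    idf_a = idf(37942, 44218)                     # ~1.153047
    query_weight = idf_a * QUERY_NORM             # ~0.0596337 = queryWeight
    field_weight = tf(10.0) * idf_a * FIELD_NORM  # ~0.1709182 = fieldWeight
    score_a = query_weight * field_weight         # ~0.0101925, as in the tree

    # The document score scales the summed clause scores by coord(2/3),
    # because only 2 of the 3 query clauses matched this document.
    total = (score_a + 0.1315887) * (2 / 3)       # ~0.09452079, the 0.09 shown
    print(f"{score_a:.7f}  {total:.7f}")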
  2. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.07
    0.073279686 = product of:
      0.109919526 = sum of:
        0.0045582205 = weight(_text_:a in 3197) [ClassicSimilarity], result of:
          0.0045582205 = score(doc=3197,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.07643694 = fieldWeight in 3197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.105361305 = sum of:
          0.063318536 = weight(_text_:de in 3197) [ClassicSimilarity], result of:
            0.063318536 = score(doc=3197,freq=2.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.28488597 = fieldWeight in 3197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
          0.04204277 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
            0.04204277 = score(doc=3197,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.23214069 = fieldWeight in 3197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3197)
      0.6666667 = coord(2/3)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold valuable expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an introduction to subject headings and thesauri; automatic indexing versus manual indexing; techniques applied in automatic indexing of text material; automatic indexing of images; the black art of indexing moving images; automatic indexing of music; taxonomies and ontologies; metadata formats and indexing; tagging; topic maps; indexing the web; and the Semantic Web.
    Date
    24. 8.2016 14:03:22
  3. OWL Web Ontology Language Test Cases (2004) 0.05
    0.04682725 = product of:
      0.14048174 = sum of:
        0.14048174 = sum of:
          0.08442472 = weight(_text_:de in 4685) [ClassicSimilarity], result of:
            0.08442472 = score(doc=4685,freq=2.0), product of:
              0.22225924 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.051718395 = queryNorm
              0.37984797 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
          0.05605703 = weight(_text_:22 in 4685) [ClassicSimilarity], result of:
            0.05605703 = score(doc=4685,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.30952093 = fieldWeight in 4685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4685)
      0.33333334 = coord(1/3)
    
    Date
    14. 8.2011 13:33:22
    Editor
    Carroll, J.J. and J. de Roo
  4. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.05
    0.04595352 = sum of:
      0.020569062 = product of:
        0.08227625 = sum of:
          0.08227625 = weight(_text_:authors in 1634) [ClassicSimilarity], result of:
            0.08227625 = score(doc=1634,freq=6.0), product of:
              0.23577455 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051718395 = queryNorm
              0.34896153 = fieldWeight in 1634, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.25 = coord(1/4)
      0.011370199 = weight(_text_:a in 1634) [ClassicSimilarity], result of:
        0.011370199 = score(doc=1634,freq=28.0), product of:
          0.05963374 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.051718395 = queryNorm
          0.19066721 = fieldWeight in 1634, product of:
            5.2915025 = tf(freq=28.0), with freq of:
              28.0 = termFreq=28.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.03125 = fieldNorm(doc=1634)
      0.014014257 = product of:
        0.028028514 = sum of:
          0.028028514 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
            0.028028514 = score(doc=1634,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.15476047 = fieldWeight in 1634, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
    Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. However, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances), and semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessing the maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules and on lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
    Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
    Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. The methodology still has to be fully automatically implemented and tested on a larger dataset in future research.
    Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
    Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Type
    a
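  The following toy Python sketch illustrates the general idea of relation-based unification described above. It is not the authors' implementation; the triple layout and the entailment table are hypothetical.

    # Toy sketch of relation-based ontology unification: two ontologies,
    # modelled as (subject, relation, object) triples, are merged through
    # relation labels that a hypothetical entailment table declares equivalent.

    Triple = tuple[str, str, str]

    # Hypothetical entailment table mapping relation labels to a canonical
    # form; a real system would derive such rules from lexical resources.
    CANONICAL = {"causes": "causes", "leads to": "causes",
                 "is part of": "part of", "belongs to": "part of"}

    def unify(ont_a: set[Triple], ont_b: set[Triple]) -> set[Triple]:
        """Merge two ontologies after normalising their relation labels."""
        def norm(t: Triple) -> Triple:
            return (t[0], CANONICAL.get(t[1], t[1]), t[2])
        return {norm(t) for t in ont_a} | {norm(t) for t in ont_b}

    a = {("smoking", "causes", "cancer")}
    b = {("smoking", "leads to", "cancer"), ("nicotine", "belongs to", "tobacco")}
    print(unify(a, b))  # the two equivalent 'causes' triples collapse into one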
  5. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.04
    0.035975903 = product of:
      0.053963855 = sum of:
        0.041071262 = product of:
          0.16428505 = sum of:
            0.16428505 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.16428505 = score(doc=701,freq=2.0), product of:
                0.43846914 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051718395 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.25 = coord(1/4)
        0.012892594 = weight(_text_:a in 701) [ClassicSimilarity], result of:
          0.012892594 = score(doc=701,freq=36.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.2161963 = fieldWeight in 701, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.6666667 = coord(2/3)
    
    Abstract
    The explosion of possibilities for ubiquitous content production has pushed the information overload problem to a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years, ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the retrieval process is inherently ambiguous: a user who is unfamiliar with the underlying repository and/or query syntax only approximates his information need in a query. It is therefore necessary to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure, strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly are key issues for realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
  6. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.03
    0.033929754 = sum of:
      0.011875552 = product of:
        0.04750221 = sum of:
          0.04750221 = weight(_text_:authors in 1626) [ClassicSimilarity], result of:
            0.04750221 = score(doc=1626,freq=2.0), product of:
              0.23577455 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051718395 = queryNorm
              0.20147301 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
        0.25 = coord(1/4)
      0.008039945 = weight(_text_:a in 1626) [ClassicSimilarity], result of:
        0.008039945 = score(doc=1626,freq=14.0), product of:
          0.05963374 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.051718395 = queryNorm
          0.13482209 = fieldWeight in 1626, product of:
            3.7416575 = tf(freq=14.0), with freq of:
              14.0 = termFreq=14.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.03125 = fieldNorm(doc=1626)
      0.014014257 = product of:
        0.028028514 = sum of:
          0.028028514 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
            0.028028514 = score(doc=1626,freq=2.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.15476047 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - The growing volume of semantic data available on the web creates a need to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use it, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
    Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set; the objective is that the user can get an idea of its overall structure. Different information architecture (IA) components supporting the overview task have been developed such that they are automatically generated from semantic data, and they have been evaluated with end-users.
    Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that users quickly get used to the components despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs.
    Originality/value - Obtaining overviews of semantic data sets cannot easily be done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
    Type
    a
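  The overview components described above are derived automatically from hierarchical semantic data. A toy Python sketch of that idea (not the paper's system; the concept hierarchy is hypothetical), generating a site-map-like overview from child-to-parent "broader" links:

    # Toy generation of a site-map overview from a hierarchical data set.
    BROADER = {                    # hypothetical concept -> broader concept
        "Ontologies": "Semantic Web",
        "Linked Data": "Semantic Web",
        "SKOS": "Ontologies",
        "OWL": "Ontologies",
    }

    def children(parent: str) -> list[str]:
        return sorted(c for c, p in BROADER.items() if p == parent)

    def site_map(node: str = "Semantic Web", depth: int = 0) -> None:
        """Print the hierarchy as an indented site map."""
        print("  " * depth + node)
        for child in children(node):
            site_map(child, depth + 1)

    site_map()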
  7. Heflin, J.; Hendler, J.: ¬A portrait of the Semantic Web in action (2001) 0.03
    0.032681372 = product of:
      0.049022056 = sum of:
        0.035995856 = product of:
          0.14398342 = sum of:
            0.14398342 = weight(_text_:authors in 2547) [ClassicSimilarity], result of:
              0.14398342 = score(doc=2547,freq=6.0), product of:
                0.23577455 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.051718395 = queryNorm
                0.61068267 = fieldWeight in 2547, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2547)
          0.25 = coord(1/4)
        0.013026199 = weight(_text_:a in 2547) [ClassicSimilarity], result of:
          0.013026199 = score(doc=2547,freq=12.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.21843673 = fieldWeight in 2547, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2547)
      0.6666667 = coord(2/3)
    
    Abstract
    Without semantically enriched content, the Web cannot reach its full potential. The authors discuss tools and techniques for generating and processing such content, thus setting a foundation upon which to build the Semantic Web. In particular, they put a Semantic Web language through its paces and try to answer questions about how people can use it, such as, How do authors generate semantic descriptions? How do agents discover these descriptions? How can agents integrate information from different sites? How can users query the Semantic Web? The authors present a system that addresses these questions and describe tools that help users interact with the Semantic Web. They motivate the design of their system with a specific application: semantic markup for computer science.
    Type
    a
  8. Finke, M.; Risch, J.: "Match Me If You Can" : Sammeln und semantisches Aufbereiten von Fußballdaten [collecting and semantically preparing football data] (2017) 0.03
    0.032193325 = product of:
      0.048289984 = sum of:
        0.0060776267 = weight(_text_:a in 3723) [ClassicSimilarity], result of:
          0.0060776267 = score(doc=3723,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.10191591 = fieldWeight in 3723, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=3723)
        0.04221236 = product of:
          0.08442472 = sum of:
            0.08442472 = weight(_text_:de in 3723) [ClassicSimilarity], result of:
              0.08442472 = score(doc=3723,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.37984797 = fieldWeight in 3723, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3723)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Cf.: www.info7.de/info7_2017-2_S-36-51.pdf.
    Type
    a
  9. Virgilio, R. De; Cappellari, P.; Maccioni, A.; Torlone, R.: Path-oriented keyword search query over RDF (2012) 0.03
    0.03157383 = product of:
      0.04736074 = sum of:
        0.010049931 = weight(_text_:a in 429) [ClassicSimilarity], result of:
          0.010049931 = score(doc=429,freq=14.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.1685276 = fieldWeight in 429, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=429)
        0.03731081 = product of:
          0.07462162 = sum of:
            0.07462162 = weight(_text_:de in 429) [ClassicSimilarity], result of:
              0.07462162 = score(doc=429,freq=4.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.33574134 = fieldWeight in 429, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=429)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We are witnessing a smooth evolution of the Web from a worldwide information space of linked documents to a global knowledge base, where resources are identified by means of uniform resource identifiers (URIs, essentially string identifiers) and are semantically described and correlated through resource description framework (RDF, a metadata data model) statements. With the size and availability of data constantly increasing (currently around 7 billion RDF triples and 150 million RDF links), a fundamental problem lies in the difficulty users face in finding and retrieving the information they are interested in. In general, to access semantic data, users need to know the organization of data and the syntax of a specific query language (e.g., SPARQL or variants thereof). Clearly, this represents an obstacle to information access for nonexpert users. For this reason, keyword search-based systems are increasingly capturing the attention of researchers. Recently, many approaches to keyword-based search over structured and semistructured data have been proposed. These approaches usually implement IR strategies on top of traditional database management systems with the goal of freeing users from having to know data organization and query languages.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
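  The abstract's point about the expertise barrier can be made concrete: retrieving semantic data normally requires knowing both the query syntax (SPARQL) and the data's schema. A minimal sketch using the SPARQLWrapper library against the public DBpedia endpoint (assumes the sparqlwrapper package is installed and the endpoint is reachable; the query itself is an arbitrary illustration, not from the paper):

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?label WHERE {
            ?city a dbo:City ;
                  dbo:country dbr:Italy ;
                  rdfs:label ?label .
            FILTER (lang(?label) = "en")
        } LIMIT 5
    """)
    sparql.setReturnFormat(JSON)

    # Keyword-search systems aim to spare users exactly this kind of query.
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["label"]["value"])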
  10. Weiand, K.; Hartl, A.; Hausmann, S.; Furche, T.; Bry, F.: Keyword-based search over semantic data (2012) 0.03
    0.031076824 = product of:
      0.046615236 = sum of:
        0.0093044285 = weight(_text_:a in 432) [ClassicSimilarity], result of:
          0.0093044285 = score(doc=432,freq=12.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.15602624 = fieldWeight in 432, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=432)
        0.03731081 = product of:
          0.07462162 = sum of:
            0.07462162 = weight(_text_:de in 432) [ClassicSimilarity], result of:
              0.07462162 = score(doc=432,freq=4.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.33574134 = fieldWeight in 432, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=432)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For a long while, the creation of Web content required at least basic knowledge of Web technologies, meaning that for many Web users, the Web was de facto a read-only medium. This changed with the arrival of the "social Web," when Web applications started to allow users to publish Web content without technological expertise. Here, content creation is often an inclusive, iterative, and interactive process. Examples of social Web applications include blogs, social networking sites, as well as many specialized applications, for example, for saving and sharing bookmarks and publishing photos. Social semantic Web applications are social Web applications in which knowledge is expressed not only in the form of text and multimedia but also through informal to formal annotations that describe, reflect, and enhance the content. These annotations often take the shape of RDF graphs backed by ontologies, but less formal annotations such as free-form tags or tags from a controlled vocabulary may also be available. Wikis are one example of social Web applications for collecting and sharing knowledge. They allow users to easily create and edit documents, so-called wiki pages, using a Web browser. The pages in a wiki are often heavily interlinked, which makes it easy to find related information and browse the content.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
  11. Multimedia content and the Semantic Web : methods, standards, and tools (2005) 0.03
    0.029172326 = sum of:
      0.0074222204 = product of:
        0.029688882 = sum of:
          0.029688882 = weight(_text_:authors in 150) [ClassicSimilarity], result of:
            0.029688882 = score(doc=150,freq=2.0), product of:
              0.23577455 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.051718395 = queryNorm
              0.12592064 = fieldWeight in 150, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
        0.25 = coord(1/4)
      0.006579225 = weight(_text_:a in 150) [ClassicSimilarity], result of:
        0.006579225 = score(doc=150,freq=24.0), product of:
          0.05963374 = queryWeight, product of:
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.051718395 = queryNorm
          0.11032722 = fieldWeight in 150, product of:
            4.8989797 = tf(freq=24.0), with freq of:
              24.0 = termFreq=24.0
            1.153047 = idf(docFreq=37942, maxDocs=44218)
            0.01953125 = fieldNorm(doc=150)
      0.01517088 = product of:
        0.03034176 = sum of:
          0.03034176 = weight(_text_:22 in 150) [ClassicSimilarity], result of:
            0.03034176 = score(doc=150,freq=6.0), product of:
              0.18110901 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051718395 = queryNorm
              0.16753313 = fieldWeight in 150, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=150)
        0.5 = coord(1/2)
    
    Classification
    006.7 22
    Date
    7. 3.2007 19:30:22
    DDC
    006.7 22
    Footnote
    Review in: JASIST 58(2007) no.3, S.457-458 (A.M.A. Ahmad): "The concept of the semantic web has emerged because search engines and text-based searching are no longer adequate, as these approaches involve an extensive information retrieval process. The deployed searching and retrieving descriptors are naturally subjective and their deployment is often restricted to the specific application domain for which the descriptors were configured. The new era of information technology imposes different kinds of requirements and challenges. Automatically extracted audiovisual features are required, as these features are more objective, domain-independent, and more native to audiovisual content. This book is a useful guide for researchers, experts, students, and practitioners; it is a very valuable reference and can lead them through their exploration and research in multimedia content and the semantic web. The book is well organized, and introduces the concept of the semantic web and multimedia content analysis to the reader through a logical sequence from standards and hypotheses through system examples, presenting relevant tools and methods. But in some chapters readers will need a good technical background to understand some of the details. Readers may attain sufficient knowledge here to start projects or research related to the book's theme; recent results and articles related to the active research area of integrating multimedia with semantic web technologies are included. This book includes full descriptions of approaches to specific problem domains such as content search, indexing, and retrieval. This book will be very useful to researchers in the multimedia content analysis field who wish to explore the benefits of emerging semantic web technologies in applying multimedia content approaches. The first part of the book covers the definition of the two basic terms multimedia content and semantic web. The Moving Picture Experts Group standards MPEG7 and MPEG21 are quoted extensively. In addition, the means of multimedia content description are elaborated upon and schematically drawn. This extensive description is introduced by authors who are actively involved in those standards and have been participating in the work of the International Organization for Standardization (ISO)/MPEG for many years. On the other hand, this results in bias against the ad hoc or nonstandard tools for multimedia description in favor of the standard approaches. This is a general book for multimedia content; more emphasis on the general multimedia description and extraction could be provided.
    Semantic web technologies are explained, and ontology representation is emphasized. There is an excellent summary of the fundamental theory behind applying a knowledge-engineering approach to vision problems. This summary represents the concept of the semantic web and multimedia content analysis. A definition of the fuzzy knowledge representation that can be used for realization in multimedia content applications has been provided, with a comprehensive analysis. The second part of the book introduces the multimedia content analysis approaches and applications. In addition, some examples of methods applicable to multimedia content analysis are presented. Multimedia content analysis is a very diverse field and concerns many other research fields at the same time; this creates strong diversity issues, as everything from low-level features (e.g., colors, DCT coefficients, motion vectors, etc.) up to the very high and semantic level (e.g., objects, events, tracks, etc.) is involved. The second part includes topics on structure identification (e.g., shot detection for video sequences), and object-based video indexing. These conventional analysis methods are supplemented by results on semantic multimedia analysis, including three detailed chapters on the development and use of knowledge models for automatic multimedia analysis. Starting from object-based indexing and continuing with machine learning, these three chapters are very logically organized. Because of the diversity of this research field, including several chapters of recent research results is not sufficient to cover the state of the art of multimedia. The editors of the book should write an introductory chapter about multimedia content analysis approaches, basic problems, and technical issues and challenges, and try to survey the state of the art of the field and thus introduce the field to the reader.
    The final part of the book discusses research in multimedia content management systems and the semantic web, and presents examples and applications for semantic multimedia analysis in search and retrieval systems. These chapters describe example systems in which current projects have been implemented, and include extensive results and real demonstrations. For example, real case scenarios such as e-commerce, medical applications, and Web services have been introduced. Topics in natural language, speech and image processing techniques and their application to multimedia indexing and content-based retrieval have been elaborated upon with extensive examples and deployment methods. The editors of the book themselves provide the readers with a chapter about their latest research results on knowledge-based multimedia content indexing and retrieval. Some interesting applications for multimedia content and the semantic web are introduced. Applications that have taken advantage of the metadata provided by MPEG7 in order to realize advance-access services for multimedia content have been provided. The applications discussed in the third part of the book provide useful guidance to researchers and practitioners planning to implement semantic multimedia analysis techniques in new research and development projects in both academia and industry. A fourth part should be added to this book: performance measurements for integrated approaches of multimedia analysis and the semantic web. Performance of the semantic approach is a very sophisticated issue and requires extensive elaboration and effort. Measuring semantic search is an ongoing research area; several chapters concerning performance measurement and analysis would be required to adequately cover this area and introduce it to readers."
  12. Papadakis, I. et al.: Highlighting timely information in libraries through social and semantic Web technologies (2016) 0.03
    0.028421786 = product of:
      0.042632677 = sum of:
        0.0075970334 = weight(_text_:a in 2090) [ClassicSimilarity], result of:
          0.0075970334 = score(doc=2090,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.12739488 = fieldWeight in 2090, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=2090)
        0.035035644 = product of:
          0.07007129 = sum of:
            0.07007129 = weight(_text_:22 in 2090) [ClassicSimilarity], result of:
              0.07007129 = score(doc=2090,freq=2.0), product of:
                0.18110901 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051718395 = queryNorm
                0.38690117 = fieldWeight in 2090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2090)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Type
    a
  13. De Luca, E.W.: Using multilingual lexical resources for extending the linked data cloud (2017) 0.03
    0.028169159 = product of:
      0.042253736 = sum of:
        0.0053179236 = weight(_text_:a in 3506) [ClassicSimilarity], result of:
          0.0053179236 = score(doc=3506,freq=2.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.089176424 = fieldWeight in 3506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3506)
        0.036935814 = product of:
          0.07387163 = sum of:
            0.07387163 = weight(_text_:de in 3506) [ClassicSimilarity], result of:
              0.07387163 = score(doc=3506,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.33236697 = fieldWeight in 3506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3506)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Type
    a
  14. Faaborg, A.; Lagoze, C.: Semantic browsing (2003) 0.03
    0.0250341 = product of:
      0.03755115 = sum of:
        0.013026199 = weight(_text_:a in 1026) [ClassicSimilarity], result of:
          0.013026199 = score(doc=1026,freq=12.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.21843673 = fieldWeight in 1026, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1026)
        0.02452495 = product of:
          0.0490499 = sum of:
            0.0490499 = weight(_text_:22 in 1026) [ClassicSimilarity], result of:
              0.0490499 = score(doc=1026,freq=2.0), product of:
                0.18110901 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051718395 = queryNorm
                0.2708308 = fieldWeight in 1026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1026)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We have created software applications that allow users to both author and use Semantic Web metadata. To create and use a layer of semantic content on top of the existing Web, we have (1) implemented a user interface that expedites the task of attributing metadata to resources on the Web, and (2) augmented a Web browser to leverage this semantic metadata to provide relevant information and tasks to the user. This project provides a framework for annotating and reorganizing existing files, pages, and sites on the Web that is similar to Vannevar Bush's original concepts of trail blazing and associative indexing.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
    Type
    a
  15. Bianchini, D.; Antonellis, V. De: Linked data services and semantics-enabled mashup (2012) 0.02
    0.02486146 = product of:
      0.03729219 = sum of:
        0.0074435426 = weight(_text_:a in 435) [ClassicSimilarity], result of:
          0.0074435426 = score(doc=435,freq=12.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.12482099 = fieldWeight in 435, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=435)
        0.029848646 = product of:
          0.059697293 = sum of:
            0.059697293 = weight(_text_:de in 435) [ClassicSimilarity], result of:
              0.059697293 = score(doc=435,freq=4.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.26859307 = fieldWeight in 435, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.03125 = fieldNorm(doc=435)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Web of Linked Data can be seen as a global database, where resources are identified through URIs, are self-described (by means of the URI dereferencing mechanism), and are globally connected through RDF links. According to the Linked Data perspective, research attention is progressively shifting from data organization and representation to linkage and composition of the huge amount of data available on the Web. For example, at the time of this writing, the DBpedia knowledge base describes more than 3.5 million things, conceptualized through 672 million RDF triples, with 6.5 million external links into other RDF datasets. Useful applications have been provided for enabling people to browse this wealth of data, like Tabulator. Other systems have been implemented to collect, index, and provide advanced searching facilities over the Web of Linked Data, such as Watson and Sindice. Besides these applications, domain-specific systems to gather and mash up Linked Data have been proposed, like DBpedia Mobile and Revyu.com. DBpedia Mobile is a location-aware client for the semantic Web that can be used on an iPhone and other mobile devices. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, the user can explore background information about his or her surroundings. Revyu.com is a Web site where you can review and rate anything that can be identified (through a URI) on the Web. Nevertheless, the potential advantages implicit in the Web of Linked Data are far from being fully exploited. Current applications hardly go beyond presenting together data gathered from different sources. Recently, research on the Web of Linked Data has been devoted to the study of models and languages to add functionalities to the Web of Linked Data by means of Linked Data services.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
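  The URI dereferencing mechanism mentioned in the abstract can be demonstrated in a few lines: a Linked Data URI is dereferenced and its RDF self-description parsed. A minimal sketch with the rdflib library (assumes rdflib is installed and dbpedia.org is reachable):

    from rdflib import Graph, URIRef

    resource = URIRef("http://dbpedia.org/resource/Amsterdam")

    g = Graph()
    g.parse(resource)  # content negotiation fetches the RDF description

    print(f"{len(g)} triples describe {resource}")
    # A few outgoing RDF links, the 'global connections' of the Web of Data:
    for _, p, o in list(g.triples((resource, None, None)))[:5]:
        print(p, "->", o)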
  16. Boer, V. de; Wielemaker, J.; Gent, J. van; Hildebrand, M.; Isaac, A.; Ossenbruggen, J. van; Schreiber, G.: Supporting linked data production for cultural heritage institutes : the Amsterdam Museum case study (2012) 0.02
    0.024288438 = product of:
      0.036432657 = sum of:
        0.010049931 = weight(_text_:a in 265) [ClassicSimilarity], result of:
          0.010049931 = score(doc=265,freq=14.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.1685276 = fieldWeight in 265, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=265)
        0.026382726 = product of:
          0.05276545 = sum of:
            0.05276545 = weight(_text_:de in 265) [ClassicSimilarity], result of:
              0.05276545 = score(doc=265,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.23740499 = fieldWeight in 265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=265)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Within the cultural heritage field, proprietary metadata and vocabularies are being transformed into public Linked Data. These efforts have mostly been at the level of large-scale aggregators such as Europeana where the original data is abstracted to a common format and schema. Although this approach ensures a level of consistency and interoperability, the richness of the original data is lost in the process. In this paper, we present a transparent and interactive methodology for ingesting, converting and linking cultural heritage metadata into Linked Data. The methodology is designed to maintain the richness and detail of the original metadata. We introduce the XMLRDF conversion tool and describe how it is integrated in the ClioPatria semantic web toolkit. The methodology and the tools have been validated by converting the Amsterdam Museum metadata to a Linked Data version. In this way, the Amsterdam Museum became the first 'small' cultural heritage institution with a node in the Linked Data cloud.
    Type
    a
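  The conversion step at the heart of the paper's methodology, mapping XML metadata field-by-field onto RDF so the original detail is kept, can be sketched as follows. This is not the XMLRDF tool itself; the record layout, namespace and property names are hypothetical.

    import xml.etree.ElementTree as ET
    from rdflib import Graph, Literal, Namespace

    XML = """<record id="am-1234">
      <title>View of Amsterdam</title>
      <creator>Unknown</creator>
    </record>"""

    AM = Namespace("http://example.org/amsterdam-museum/")  # hypothetical
    g = Graph()

    rec = ET.fromstring(XML)
    subject = AM[rec.get("id")]
    for field in rec:  # one triple per XML field, nothing is abstracted away
        g.add((subject, AM[field.tag], Literal(field.text)))

    print(g.serialize(format="turtle"))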
  17. Zenz, G.; Zhou, X.; Minack, E.; Siberski, W.; Nejdl, W.: Interactive query construction for keyword search on the Semantic Web (2012) 0.02
    0.024288438 = product of:
      0.036432657 = sum of:
        0.010049931 = weight(_text_:a in 430) [ClassicSimilarity], result of:
          0.010049931 = score(doc=430,freq=14.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.1685276 = fieldWeight in 430, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=430)
        0.026382726 = product of:
          0.05276545 = sum of:
            0.05276545 = weight(_text_:de in 430) [ClassicSimilarity], result of:
              0.05276545 = score(doc=430,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.23740499 = fieldWeight in 430, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=430)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the advance of the semantic Web, increasing amounts of data are available in a structured and machine-understandable form. This opens opportunities for users to employ semantic queries instead of simple keyword-based ones to accurately express their information need. However, constructing semantic queries is a demanding task for human users [11]. To compose a valid semantic query, a user has to (1) master a query language (e.g., SPARQL) and (2) acquire sufficient knowledge about the ontology or the schema of the data source. While there are systems which support this task with visual tools [21, 26] or natural language interfaces [3, 13, 14, 18], the process of query construction can still be complex and time consuming. According to [24], users prefer keyword search, and struggle with the construction of semantic queries even when supported by a natural language interface. Several keyword search approaches have already been proposed to ease information seeking on semantic data [16, 32, 35] or databases [1, 31]. However, keyword queries lack the expressivity to precisely describe the user's intent. As a result, ranking can at best put the query intentions of the majority on top, making it impossible to take the intentions of all users into consideration.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
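  The schema-knowledge requirement described above is what interactive query construction tries to bridge. A drastically simplified sketch of turning keywords into a structured query via a hand-made index from keywords to triple patterns (all names hypothetical; this is not the chapter's algorithm):

    # Hypothetical index from keywords to SPARQL triple patterns.
    SCHEMA_INDEX = {
        "city":  "?x a dbo:City .",
        "italy": "?x dbo:country dbr:Italy .",
    }

    def keywords_to_sparql(keywords: list[str]) -> str:
        """Assemble a SPARQL query from the patterns the keywords map to."""
        patterns = [SCHEMA_INDEX[k.lower()]
                    for k in keywords if k.lower() in SCHEMA_INDEX]
        body = "\n        ".join(patterns)
        return f"SELECT ?x WHERE {{\n        {body}\n    }}"

    print(keywords_to_sparql(["city", "Italy"]))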
  18. Ioannou, E.; Nejdl, W.; Niederée, C.; Velegrakis, Y.: Embracing uncertainty in entity linking (2012) 0.02
    0.023791438 = product of:
      0.035687156 = sum of:
        0.0093044285 = weight(_text_:a in 433) [ClassicSimilarity], result of:
          0.0093044285 = score(doc=433,freq=12.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.15602624 = fieldWeight in 433, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=433)
        0.026382726 = product of:
          0.05276545 = sum of:
            0.05276545 = weight(_text_:de in 433) [ClassicSimilarity], result of:
              0.05276545 = score(doc=433,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.23740499 = fieldWeight in 433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The modern Web has grown from a publishing place of well-structured data and HTML pages for companies and experienced users into a vivid publishing and data exchange community in which everyone can participate, both as a data consumer and as a data producer. Unavoidably, the data available on the Web has become highly heterogeneous, ranging from highly structured and semistructured to highly unstructured user-generated content, reflecting different perspectives and structuring principles. The full potential of such data can only be realized by combining information from multiple sources. For instance, the knowledge that is typically embedded in monolithic applications can be outsourced and thus used also in other applications. Numerous systems nowadays already actively utilize existing content from various sources such as WordNet or Wikipedia. Some well-known examples of such systems include DBpedia, Freebase, Spock, and DBLife. A major challenge when combining and querying information from multiple heterogeneous sources is entity linkage, i.e., the ability to detect whether two pieces of information correspond to the same real-world object. This chapter introduces a novel approach for addressing the entity linkage problem for heterogeneous, uncertain, and volatile data.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
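  A toy illustration of entity linkage with explicit uncertainty (not the chapter's approach): instead of a hard same/different decision, each candidate pair keeps a similarity score that later processing can weigh.

    from difflib import SequenceMatcher
    from itertools import product

    source_a = ["J. R. R. Tolkien", "Jane Austen"]
    source_b = ["Tolkien, J.R.R.", "Austin, Jane", "Isaac Asimov"]

    def normalise(name: str) -> str:
        """Lowercase, strip punctuation, and sort name parts."""
        parts = name.replace(",", " ").replace(".", " ").lower().split()
        return " ".join(sorted(parts))

    # Score every cross-source pair; keep the score instead of thresholding.
    linkage = [
        (a, b, round(SequenceMatcher(None, normalise(a), normalise(b)).ratio(), 2))
        for a, b in product(source_a, source_b)
    ]
    for a, b, score in sorted(linkage, key=lambda t: -t[2]):
        print(f"{score:4}  {a!r} <-> {b!r}")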
  19. Calì, A.; Gottlob, G.; Pieris, A.: ¬The return of the entity-relationship model : ontological query answering (2012) 0.02
    0.023791438 = product of:
      0.035687156 = sum of:
        0.0093044285 = weight(_text_:a in 434) [ClassicSimilarity], result of:
          0.0093044285 = score(doc=434,freq=12.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.15602624 = fieldWeight in 434, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=434)
        0.026382726 = product of:
          0.05276545 = sum of:
            0.05276545 = weight(_text_:de in 434) [ClassicSimilarity], result of:
              0.05276545 = score(doc=434,freq=2.0), product of:
                0.22225924 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.051718395 = queryNorm
                0.23740499 = fieldWeight in 434, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=434)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Entity-Relationship (ER) model is a fundamental formalism for conceptual modeling in database design; it was introduced by Chen in his milestone paper, and it is now widely used, being flexible and easily understood by practitioners. With the rise of the Semantic Web, conceptual modeling formalisms have gained importance again as ontology formalisms, in the Semantic Web parlance. Ontologies and conceptual models are aimed at representing, rather than the structure of data, the domain of interest, that is, the fragment of the real world that is being represented by the data and the schema. A prominent family of formalisms for modeling ontologies are Description Logics (DLs), which are decidable fragments of first-order logic, particularly suitable for ontological modeling and querying. In particular, DL ontologies are sets of assertions describing sets of objects and (usually binary) relations among such sets, exactly in the same fashion as the ER model. Recently, research on DLs has been focusing on the problem of answering queries under ontologies, that is, given a query q, an instance B, and an ontology X, answering q under B and X amounts to computing the answers that are logically entailed from B by using the assertions of X. In this context, where data size is usually large, a central issue is the data complexity of query answering, i.e., the computational complexity with respect to the data set B only, while the ontology X and the query q are fixed.
    Source
    Semantic search over the Web. Eds.: R. De Virgilio, et al
  20. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.02
    0.02344053 = product of:
      0.035160795 = sum of:
        0.010635847 = weight(_text_:a in 759) [ClassicSimilarity], result of:
          0.010635847 = score(doc=759,freq=8.0), product of:
            0.05963374 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.051718395 = queryNorm
            0.17835285 = fieldWeight in 759, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.02452495 = product of:
          0.0490499 = sum of:
            0.0490499 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.0490499 = score(doc=759,freq=2.0), product of:
                0.18110901 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051718395 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
    Date
    11. 5.2013 19:22:18
    Type
    a

Languages

  • e 242
  • d 69
  • f 1

Types

  • a 213
  • el 81
  • m 43
  • s 17
  • n 10
  • x 6
  • r 2
