Search (89 results, page 1 of 5)

  • × theme_ss:"Semantic Web"
  • × theme_ss:"Wissensrepräsentation"
  1. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactical nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, rather than its representation). This leads to very low usefulness of the retrieval results for the user's task at hand. In the last ten years ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, so that the retrieval process can be driven by the meaning of the content. However, the ambiguous nature of the retrieval process, in which a user, owing to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In this way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics emerges automatically from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need correctly and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  2. OWL Web Ontology Language Test Cases (2004) 0.02
    Abstract
    This document contains and presents test cases for the Web Ontology Language (OWL) approved by the Web Ontology Working Group. Many of the test cases illustrate the correct usage of the Web Ontology Language (OWL), and the formal meaning of its constructs. Other test cases illustrate the resolution of issues considered by the Working Group. Conformance for OWL documents and OWL document checkers is specified.
    Date
    14. 8.2011 13:33:22
  3. Hollink, L.; Assem, M. van: Estimating the relevance of search results in the Culture-Web : a study of semantic distance measures (2010) 0.02
    Abstract
    More and more cultural heritage institutions publish their collections, vocabularies and metadata on the Web. The resulting Web of linked cultural data opens up exciting new possibilities for searching and browsing through these cultural heritage collections. We report on ongoing work in which we investigate the estimation of relevance in this Web of Culture. We study existing measures of semantic distance and how they apply to two use cases. The use cases relate to the structured, multilingual and multimodal nature of the Culture Web. We distinguish between measures using the Web, such as Google distance and PMI, and measures using the Linked Data Web, i.e. the semantic structure of metadata vocabularies. We perform a small study in which we compare these semantic distance measures to human judgements of relevance. Although it is too early to draw any definitive conclusions, the study provides new insights into the applicability of semantic distance measures to the Web of Culture, and clear starting points for further research.
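     The web-based measures named above can be sketched directly from page counts. Below is a minimal, hedged illustration of pointwise mutual information and the Normalized Google Distance of Cilibrasi and Vitányi; the hit counts and index size are invented for the example and are not taken from the study.

```python
import math

def pmi(fx, fy, fxy, n):
    """Pointwise mutual information from page counts: log2 of p(x,y) / (p(x) * p(y))."""
    return math.log2((fxy / n) / ((fx / n) * (fy / n)))

def ngd(fx, fy, fxy, n):
    """Normalized Google Distance (Cilibrasi & Vitanyi) from page counts."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Hypothetical hit counts: term x on 20M pages, term y on 5M pages,
# both together on 1M pages, out of an assumed index of 10 billion pages.
print(pmi(20e6, 5e6, 1e6, 1e10))  # > 0: the terms co-occur more often than chance
print(ngd(20e6, 5e6, 1e6, 1e10))  # closer to 0 means semantically closer
```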
    Date
    26.12.2011 13:40:22
  4. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.02
    Abstract
     This chapter presents ontologies and their role in the creation of the Semantic Web. Ontologies hold special interest because they are very closely related to the way we understand the world. They provide a common understanding, the very first step towards successful communication. In the following sections, we will present ontologies and how they are created and used. We will describe available tools for specifying and working with ontologies.
    Date
    31. 7.2010 16:58:22
  5. Mayfield, J.; Finin, T.: Information retrieval on the Semantic Web : integrating inference and retrieval 0.02
    Abstract
     One vision of the Semantic Web is that it will be much like the Web we know today, except that documents will be enriched by annotations in machine-understandable markup. These annotations will provide metadata about the documents as well as machine-interpretable statements capturing some of the meaning of document content. We discuss how the information retrieval paradigm might be recast in such an environment. We suggest that retrieval can be tightly bound to inference. Doing so makes today's Web search engines useful to Semantic Web inference engines, and allows improvements in either retrieval or inference to lead directly to improvements in the other.
    Date
    12. 2.2011 17:35:22
  6. Zeng, M.L.; Fan, W.; Lin, X.: SKOS for an integrated vocabulary structure (2008) 0.02
    Abstract
     In order to transfer the Chinese Classified Thesaurus (CCT) into a machine-processable format and provide CCT-based Web services, a pilot study has been conducted in which a variety of selected CCT classes and mapped thesaurus entries are encoded with SKOS. OWL and RDFS are also used to encode the same contents for the purposes of feasibility and cost-benefit comparison. CCT is a collective effort led by the National Library of China. It is an integration of the national standards Chinese Library Classification (CLC) 4th edition and Chinese Thesaurus (CT). As a manually created mapping product, CCT provides for each of the classes the corresponding thesaurus terms, and vice versa. The coverage of CCT includes four major clusters: philosophy, social sciences and humanities, natural sciences and technologies, and general works. There are 22 main classes, 52,992 sub-classes and divisions, 110,837 preferred thesaurus terms, 35,690 entry terms (non-preferred terms), and 59,738 pre-coordinated headings (Chinese Classified Thesaurus, 2005). Major challenges of encoding this large vocabulary come from its integrated structure. CCT is the result of the combination of two structures (illustrated in Figure 1): a thesaurus that uses the ISO 2788 standardized structure and a classification scheme that is basically enumerative, but provides some flexibility for several kinds of synthetic mechanisms. Other challenges include the complex relationships caused by differences in the granularity of the two original schemes and their presentation with various levels of SKOS elements, as well as the diverse coordination of entries due to the use of auxiliary tables and pre-coordinated headings derived from combining classes, subdivisions, and thesaurus terms, which do not correspond to existing unique identifiers. The poster reports on the progress, shares sample SKOS entries, and summarizes problems identified during the SKOS encoding process. Although OWL Lite and OWL Full provide richer expressiveness, the cost-benefit issues and the final purposes of encoding CCT raise questions about using such approaches.
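     As an illustration of the kind of SKOS encoding described here, the following rdflib sketch links a classification class to a mapped thesaurus term; the namespace, notation, labels and the choice of skos:relatedMatch are assumptions made for the example and do not reproduce the actual CCT data.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

CCT = Namespace("http://example.org/cct/")  # hypothetical namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("cct", CCT)

cls = CCT["class/TP391"]   # a classification class (hypothetical identifier)
term = CCT["term/t0001"]   # a mapped thesaurus term (hypothetical identifier)

for concept in (cls, term):
    g.add((concept, RDF.type, SKOS.Concept))

g.add((cls, SKOS.notation, Literal("TP391")))
g.add((cls, SKOS.prefLabel, Literal("Natural language processing", lang="en")))
g.add((term, SKOS.prefLabel, Literal("自然语言处理", lang="zh")))

# The class-to-thesaurus-term mapping, expressed as a loose SKOS mapping relation.
g.add((cls, SKOS.relatedMatch, term))

print(g.serialize(format="turtle"))
```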
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  7. Prud'hommeaux, E.; Gayo, E.: RDF ventures to boldly meet your most pedestrian needs (2015) 0.02
    Abstract
    Defined in 1999 and paired with XML, the Resource Description Framework (RDF) has been cast as an RDF Schema, producing data that is well-structured but not validated, permitting certain illogical relationships. When stakeholders convened in 2014 to consider solutions to the data validation challenge, a W3C working group proposed Resource Shapes and Shape Expressions to describe the properties expected for an RDF node. Resistance rose from concerns about data and schema reuse, key principles in RDF. Ideally data types and properties are designed for broad use, but they are increasingly adopted with local restrictions for specific purposes. Resource Shapes are commonly treated as record classes, standing in for data structures but losing flexibility for later reuse. Of various solutions to the resulting tensions, the concept of record classes may be the most reasonable basis for agreement, satisfying stakeholders' objectives while allowing for variations with constraints.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.18-22
  8. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.02
    Abstract
     Purpose - Ontologies are prone to wide semantic variability due to subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations.
     Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Apparently, matching individual entities cannot result in full integration of ontologies' semantics without matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies.
     Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies.
     Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules, along with initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research.
     Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results.
     Originality/value - To the best of our knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 66(2014) no.5, S.494-518
  9. Monireh, E.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.01
    Abstract
    Semantic Web knowledge representation standards, and in particular RDF and OWL, often come endowed with a formal semantics which is considered to be of fundamental importance for the field. Reasoning, i.e., the drawing of logical inferences from knowledge expressed in such standards, is traditionally based on logical deductive methods and algorithms which can be proven to be sound and complete and terminating, i.e. correct in a very strong sense. For various reasons, though, in particular the scalability issues arising from the ever increasing amounts of Semantic Web data available and the inability of deductive algorithms to deal with noise in the data, it has been argued that alternative means of reasoning should be investigated which bear high promise for high scalability and better robustness. From this perspective, deductive algorithms can be considered the gold standard regarding correctness against which alternative methods need to be tested. In this paper, we show that it is possible to train a Deep Learning system on RDF knowledge graphs, such that it is able to perform reasoning over new RDF knowledge graphs, with high precision and recall compared to the deductive gold standard.
    Date
    16.11.2018 14:22:01
  10. Gendt, M. van; Isaac, I.; Meij, L. van der; Schlobach, S.: Semantic Web techniques for multiple views on heterogeneous collections : a case study (2006) 0.01
    Abstract
    Integrated digital access to multiple collections is a prominent issue for many Cultural Heritage institutions. The metadata describing diverse collections must be interoperable, which requires aligning the controlled vocabularies that are used to annotate objects from these collections. In this paper, we present an experiment where we match the vocabularies of two collections by applying the Knowledge Representation techniques established in recent Semantic Web research. We discuss the steps that are required for such matching, namely formalising the initial resources using Semantic Web languages, and running ontology mapping tools on the resulting representations. In addition, we present a prototype that enables the user to browse the two collections using the obtained alignment while still providing her with the original vocabulary structures.
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
  11. Manaf, N.A. Abdul; Bechhofer, S.; Stevens, R.: ¬The current state of SKOS vocabularies on the Web (2012) 0.00
    Abstract
     We present a survey of the current state of Simple Knowledge Organization System (SKOS) vocabularies on the Web. Candidate vocabularies were gathered through collections and web crawling, with 478 identified as complying with a given definition of a SKOS vocabulary. Analyses were then conducted that included investigation of the use of SKOS constructs; the use of SKOS semantic relations and lexical labels; and the structure of vocabularies in terms of the hierarchical and associative relations, branching factors and the depth of the vocabularies. Even though SKOS concepts are considered to be the core of SKOS vocabularies, our findings were that not all published SKOS vocabularies explicitly declared SKOS concepts. Almost one-third of the SKOS vocabularies collected fall into the category of term lists, with no use of any SKOS semantic relations. As concept labelling is core to SKOS vocabularies, a surprising finding is that not all SKOS vocabularies use SKOS lexical labels, whether skos:prefLabel or skos:altLabel, for their concepts. The branching factors and maximum depth of the vocabularies have no direct relationship to the size of the vocabularies. We also observed some common modelling slips in SKOS vocabularies. The survey is useful when considering, for example, converting artefacts such as OWL ontologies into SKOS, where a definition of typicality of SKOS vocabularies could be used to guide the conversion. Moreover, the survey results can serve to provide a better understanding of the modelling styles of the SKOS vocabularies published on the Web, especially when considering the creation of applications that utilize these vocabularies.
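     A rough sketch, using rdflib, of the per-vocabulary profiling such a survey relies on; the input file name is hypothetical, and the term-list test mirrors the definition used above (a vocabulary that uses no SKOS semantic relations at all).

```python
from collections import Counter
from rdflib import Graph
from rdflib.namespace import RDF, SKOS

def profile(path):
    """Tally the SKOS constructs used in one vocabulary file."""
    g = Graph()
    g.parse(path)  # the serialisation format is guessed from the file extension
    stats = Counter()
    stats["concepts"] = sum(1 for _ in g.subjects(RDF.type, SKOS.Concept))
    props = {"prefLabel": SKOS.prefLabel, "altLabel": SKOS.altLabel,
             "broader": SKOS.broader, "narrower": SKOS.narrower,
             "related": SKOS.related}
    for name, prop in props.items():
        stats[name] = sum(1 for _ in g.triples((None, prop, None)))
    # A vocabulary with concepts but no semantic relations is a plain term list.
    stats["is_term_list"] = int(stats["broader"] + stats["narrower"] + stats["related"] == 0)
    return stats

print(profile("vocabulary.ttl"))  # hypothetical input file
```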
  12. Fluit, C.; Horst, H. ter; Meer, J. van der; Sabou, M.; Mika, P.: Spectacle (2004) 0.00
    Abstract
     Many Semantic Web initiatives improve the capabilities of machines to exchange the meaning of information with other machines. These efforts lead to an increased quality of the application's results, but their user interfaces take little or no advantage of the semantic richness. For example, an ontology-based search engine will use its ontology when evaluating the user's query (e.g. for query formulation, disambiguation or evaluation), but fails to use it to significantly enrich the presentation of the results to a human user. One could imagine, for instance, replacing the endless list of hits with a structured presentation based on the semantic properties of the hits. Another problem is that the modelling of a domain is done from a single perspective (most often that of the information provider). Therefore, presentation based on the resulting ontology is unlikely to satisfy the needs of all the different types of users of the information. So even assuming an ontology for the domain is in place, mapping that ontology to the needs of individual users - based on their tasks, expertise and personal preferences - is not trivial.
  13. OWL Web Ontology Language Overview (2004) 0.00
    Abstract
    The OWL Web Ontology Language is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly-expressive sublanguages: OWL Lite, OWL DL, and OWL Full. This document is written for readers who want a first impression of the capabilities of OWL. It provides an introduction to OWL by informally describing the features of each of the sublanguages of OWL. Some knowledge of RDF Schema is useful for understanding this document, but not essential. After this document, interested readers may turn to the OWL Guide for more detailed descriptions and extensive examples on the features of OWL. The normative formal definition of OWL can be found in the OWL Semantics and Abstract Syntax.
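     To make the "additional vocabulary" concrete, here is a small sketch (using rdflib; the class and property names are invented for illustration) of statements that OWL can express but RDF Schema alone cannot.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/vocab#")  # hypothetical vocabulary

g = Graph()
g.bind("owl", OWL)
g.bind("ex", EX)

# RDF Schema is enough to say that Painting is a kind of Artwork ...
g.add((EX.Painting, RDF.type, OWL.Class))
g.add((EX.Sculpture, RDF.type, OWL.Class))
g.add((EX.Painting, RDFS.subClassOf, EX.Artwork))

# ... but only OWL's additional vocabulary lets us state that the two classes
# can never share an instance, or that a property is transitive.
g.add((EX.Painting, OWL.disjointWith, EX.Sculpture))
g.add((EX.partOf, RDF.type, OWL.TransitiveProperty))

print(g.serialize(format="turtle"))
```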
  14. Iorio, A. di; Peroni, S.; Vitali, F.: ¬A Semantic Web approach to everyday overlapping markup (2011) 0.00
    Abstract
    Overlapping structures in XML are not symptoms of a misunderstanding of the intrinsic characteristics of a text document nor evidence of extreme scholarly requirements far beyond those needed by the most common XML-based applications. On the contrary, overlaps have started to appear in a large number of incredibly popular applications hidden under the guise of syntactical tricks to the basic hierarchy of the XML data format. Unfortunately, syntactical tricks have the drawback that the affected structures require complicated workarounds to support even the simplest query or usage. In this article, we present Extremely Annotational Resource Description Framework (RDF) Markup (EARMARK), an approach to overlapping markup that simplifies and streamlines the management of multiple hierarchies on the same content, and provides an approach to sophisticated queries and usages over such structures without the need of ad-hoc applications, simply by using Semantic Web tools and languages. We compare how relevant tasks (e.g., the identification of the contribution of an author in a word processor document) are of some substantial complexity when using the original data format and become more or less trivial when using EARMARK. We finally evaluate positively the memory and disk requirements of EARMARK documents in comparison to Open Office and Microsoft Word XML-based formats.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1696-1716
  15. Veltman, K.H.: Towards a Semantic Web for culture 0.00
    Abstract
    Today's semantic web deals with meaning in a very restricted sense and offers static solutions. This is adequate for many scientific, technical purposes and for business transactions requiring machine-to-machine communication, but does not answer the needs of culture. Science, technology and business are concerned primarily with the latest findings, the state of the art, i.e. the paradigm or dominant world-view of the day. In this context, history is considered non-essential because it deals with things that are out of date. By contrast, culture faces a much larger challenge, namely, to re-present changes in ways of knowing; changing meanings in different places at a given time (synchronically) and over time (diachronically). Culture is about both objects and the commentaries on them; about a cumulative body of knowledge; about collective memory and heritage. Here, history plays a central role and older does not mean less important or less relevant. Hence, a Leonardo painting that is 400 years old, or a Greek statue that is 2500 years old, typically have richer commentaries and are often more valuable than their contemporary equivalents. In this context, the science of meaning (semantics) is necessarily much more complex than semantic primitives. A semantic web in the cultural domain must enable us to trace how meaning and knowledge organisation have evolved historically in different cultures. This paper examines five issues to address this challenge: 1) different world-views (i.e. a shift from substance to function and from ontology to multiple ontologies); 2) developments in definitions and meaning; 3) distinctions between words and concepts; 4) new classes of relations; and 5) dynamic models of knowledge organisation. These issues reveal that historical dimensions of cultural diversity in knowledge organisation are also central to classification of biological diversity. New ways are proposed of visualizing knowledge using a time/space horizon to distinguish between universals and particulars. It is suggested that new visualization methods make possible a history of questions as well as of answers, thus enabling dynamic access to cultural and historical dimensions of knowledge. Unlike earlier media, which were limited to recording factual dimensions of collective memory, digital media enable us to explore theories, ways of perceiving, ways of knowing; to enter into other mindsets and world-views and thus to attain novel insights and new levels of tolerance. Some practical consequences are outlined.
    Source
    Journal of digital information. 4(2004), no.4
  16. Sure, Y.; Erdmann, M.; Studer, R.: OntoEdit: collaborative engineering of ontologies (2004) 0.00
    Abstract
     Developing ontologies is central to our vision of Semantic Web-based knowledge management. The methodology described in Chapter 3 guides the development of ontologies for different applications. However, because of the size of ontologies, their complexity, their formal underpinnings and the necessity to come to a shared understanding within a group of people when defining an ontology, ontology construction is still far from being a well-understood process. Concerning the methodology, OntoEdit focuses on three of the main steps for ontology development (the methodology is described in Chapter 3), viz. the kick-off, refinement, and evaluation phases. We describe the steps supported by OntoEdit and focus on the collaborative aspects that occur during each of the steps. First, all requirements of the envisaged ontology are collected during the kick-off phase. Typically for ontology engineering, ontology engineers and domain experts are joined in a team that works together on a description of the domain and the goal of the ontology, design guidelines, available knowledge sources (e.g. re-usable ontologies and thesauri, etc.), potential users, and use cases and applications supported by the ontology. The output of this phase is a semiformal description of the ontology. Second, during the refinement phase, the team extends the semi-formal description in several iterations and formalizes it in an appropriate representation language like RDF(S) or, more advanced, DAML+OIL. The output of this phase is a mature ontology (the 'target ontology'). Third, the target ontology needs to be evaluated according to the requirement specifications. Typically this phase serves as a proof of the usefulness of ontologies (and ontology-based applications) and may involve the engineering team as well as end users of the targeted application. The output of this phase is an evaluated ontology, ready for roll-out into a productive environment. Support for these collaborative development steps within the ontology development methodology is crucial in order to meet the conflicting needs for ease of use and construction of complex ontology structures. We now illustrate OntoEdit's support for each of the supported steps. The examples shown are taken from the Swiss Life case study on skills management (cf. Chapter 12).
  17. Chaudhury, S.; Mallik, A.; Ghosh, H.: Multimedia ontology : representation and applications (2016) 0.00
    Abstract
    The book covers multimedia ontology in heritage preservation with intellectual explorations of various themes of Indian cultural heritage. The result of more than 15 years of collective research, Multimedia Ontology: Representation and Applications provides a theoretical foundation for understanding the nature of media data and the principles involved in its interpretation. The book presents a unified approach to recent advances in multimedia and explains how a multimedia ontology can fill the semantic gap between concepts and the media world. It relays real-life examples of implementations in different domains to illustrate how this gap can be filled. The book contains information that helps with building semantic, content-based search and retrieval engines and also with developing vertical application-specific search applications. It guides you in designing multimedia tools that aid in logical and conceptual organization of large amounts of multimedia data. As a practical demonstration, it showcases multimedia applications in cultural heritage preservation efforts and the creation of virtual museums. The book describes the limitations of existing ontology techniques in semantic multimedia data processing, as well as some open problems in the representations and applications of multimedia ontology. As an antidote, it introduces new ontology representation and reasoning schemes that overcome these limitations. The long, compiled efforts reflected in Multimedia Ontology: Representation and Applications are a signpost for new achievements and developments in efficiency and accessibility in the field.
    Footnote
     Reviewed in: Annals of Library and Information Studies 62(2015) no.4, S.299-300 (A.K. Das)
  18. Miles, A.; Matthews, B.; Beckett, D.; Brickley, D.; Wilson, M.; Rogers, N.: SKOS: A language to describe simple knowledge structures for the web (2005) 0.00
    Content
    "Textual content-based search engines for the web have a number of limitations. Firstly, many web resources have little or no textual content (images, audio or video streams etc.) Secondly, precision is low where natural language terms have overloaded meaning (e.g. 'bank', 'watch', 'chip' etc.) Thirdly, recall is incomplete where the search does not take account of synonyms or quasi-synonyms. Fourthly, there is no basis for assisting a user in modifying (expanding, refining, translating) a search based on the meaning of the original search. Fifthly, there is no basis for searching across natural languages, or framing search queries in terms of symbolic languages. The Semantic Web is a framework for creating, managing, publishing and searching semantically rich metadata for web resources. Annotating web resources with precise and meaningful statements about conceptual aspects of their content provides a basis for overcoming all of the limitations of textual content-based search engines listed above. Creating this type of metadata requires that metadata generators are able to refer to shared repositories of meaning: 'vocabularies' of concepts that are common to a community, and describe the domain of interest for that community.
     This type of effort is common in the digital library community, where a group of experts will interact with a user community to create a thesaurus for a specific domain (e.g. the Art & Architecture Thesaurus (AAT)) or an overarching classification scheme (e.g. the Dewey Decimal Classification). A similar type of activity is being undertaken more recently in a less centralised manner by web communities, producing for example the DMOZ web directory, or the Topic Exchange for weblog topics. The web, including the semantic web, provides a medium within which communities can interact and collaboratively build and use vocabularies of concepts. A simple language is required that allows these communities to express the structure and content of their vocabularies in a machine-understandable way, enabling exchange and reuse. The Resource Description Framework (RDF) is an ideal language for making statements about web resources and publishing metadata. However, RDF provides only the low-level semantics required to form metadata statements. RDF vocabularies must be built on top of RDF to support the expression of more specific types of information within metadata. Ontology languages such as OWL add a layer of expressive power to RDF, and provide powerful tools for defining complex conceptual structures, which can be used to generate rich metadata. However, the class-oriented, logically precise modelling required to construct useful web ontologies is demanding in terms of expertise, effort, and therefore cost. In many cases this type of modelling may be superfluous or unsuited to requirements. Therefore there is a need for a language for expressing vocabularies of concepts for use in semantically rich metadata, that is powerful enough to support semantically enhanced search, but simple enough to be undemanding in terms of the cost and expertise required to use it."
  19. Corcho, O.; Poveda-Villalón, M.; Gómez-Pérez, A.: Ontology engineering in the era of linked data (2015) 0.00
    Abstract
     Ontology engineering encompasses the methods, tools and techniques used to develop ontologies. Without requiring ontologies, linked data is driving a paradigm shift, bringing benefits and drawbacks to the publishing world. Ontologies may be heavyweight, supporting deep understanding of a domain, or lightweight, suited to simple classification of concepts and more adaptable for linked data. They also vary in domain specificity, usability and reusability. Hybrid vocabularies drawing elements from diverse sources often suffer from internally incompatible semantics. To serve linked data purposes, ontology engineering teams require a range of skills in philosophy, computer science, web development, librarianship and domain expertise.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, S.13-17
  20. Engels, R.H.P.; Lech, T.Ch.: Generating ontologies for the Semantic Web : OntoBuilder (2004) 0.00
    Abstract
     Significant progress has been made in technologies for publishing and distributing knowledge and information on the web. However, much of the published information is not organized, and it is hard to find answers to questions that require more than a keyword search. In general, one can say that the web is organizing itself. Information is often published in a relatively ad hoc fashion. Typically, concern about the presentation of content has been limited to purely layout issues. This, combined with the fact that the representation language used on the World Wide Web (HTML) is mainly format-oriented, makes publishing on the WWW easy, giving it enormous expressiveness. People add private, educational or organizational content of an immensely diverse nature to the web. Content on the web is growing closer to a real universal knowledge base, with one problem left relatively undefined: the interpretation of its contents. Although widely acknowledged for its general and universal advantages, the increasing popularity of the web also shows us some major drawbacks. The development of the information content on the web during the last year alone clearly indicates the need for some changes. Perhaps one of the most significant problems with the web as a distributed information system is the difficulty of finding and comparing information.
    Thus, there is a clear need for the web to become more semantic. The aim of introducing semantics into the web is to enhance the precision of search, but also enable the use of logical reasoning on web contents in order to answer queries. The CORPORUM OntoBuilder toolset is developed specifically for this task. It consists of a set of applications that can fulfil a variety of tasks, either as stand-alone tools, or augmenting each other. Important tasks that are dealt with by CORPORUM are related to document and information retrieval (find relevant documents, or support the user finding them), as well as information extraction (building a knowledge base from web documents to answer queries), information dissemination (summarizing strategies and information visualization), and automated document classification strategies. First versions of the toolset are encouraging in that they show large potential as a supportive technology for building up the Semantic Web. In this chapter, methods for transforming the current web into a semantic web are discussed, as well as a technical solution that can perform this task: the CORPORUM tool set. First, the toolset is introduced; followed by some pragmatic issues relating to the approach; then there will be a short overview of the theory in relation to CognIT's vision; and finally, a discussion on some of the applications that arose from the project.

Languages

  • e 88
  • d 1

Types

  • a 50
  • el 41
  • n 9
  • m 8
  • s 6
  • x 2
  • r 1
