Search (19 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Semantic Web"
  • theme_ss:"Semantische Interoperabilität"
  1. Carbonaro, A.; Santandrea, L.: A general Semantic Web approach for data analysis on graduates statistics 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 5309) [ClassicSimilarity], result of:
          0.040348392 = score(doc=5309,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 5309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5309)
        0.032950602 = weight(_text_:system in 5309) [ClassicSimilarity], result of:
          0.032950602 = score(doc=5309,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 5309, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5309)
      0.4 = coord(2/5)
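    For readers unfamiliar with Lucene explain output, the breakdown above can be recomputed directly. The following Python sketch (added for illustration, not part of the catalogue record) reproduces the 0.03 score of this entry from the tf, idf, queryNorm and fieldNorm values shown:

      import math

      def term_score(raw_tf, idf, query_norm, field_norm):
          # ClassicSimilarity: queryWeight = idf * queryNorm,
          # fieldWeight = sqrt(tf) * idf * fieldNorm; the term score is their product.
          return (idf * query_norm) * (math.sqrt(raw_tf) * idf * field_norm)

      query_norm, field_norm = 0.04251826, 0.0390625
      w_context = term_score(2.0, 4.14465, query_norm, field_norm)    # ~0.040348
      w_system = term_score(4.0, 3.1495528, query_norm, field_norm)   # ~0.032951
      print(0.4 * (w_context + w_system))  # coord(2/5) * sum = ~0.0293196, displayed as 0.03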
    
    Abstract
    Currently, several datasets released in a Linked Open Data format are available at national and international level, but the lack of shared strategies for defining the concepts used by the statistical publishing community makes it difficult to compare facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduate surveys domain. The system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, which cover more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and their subsequent publication in the Linked Open Data context; iii) provides for their reuse in the open data scope; iv) enables logical reasoning over the knowledge representation. SW4AL establishes a common semantics for the graduate surveys domain by proposing a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
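    The SPARQL endpoint mentioned in the abstract could be queried from Python roughly as sketched below; the endpoint URL, prefix and property names are hypothetical placeholders, since the actual SW4AL vocabulary is not given here:

      from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

      # Hypothetical endpoint and vocabulary; substitute the real SW4AL values.
      endpoint = SPARQLWrapper("http://example.org/sw4al/sparql")
      endpoint.setQuery("""
          PREFIX sw4al: <http://example.org/sw4al/ontology#>
          SELECT ?degree (COUNT(?g) AS ?graduates)
          WHERE { ?g a sw4al:Graduate ; sw4al:degreeCourse ?degree . }
          GROUP BY ?degree
      """)
      endpoint.setReturnFormat(JSON)
      for row in endpoint.query().convert()["results"]["bindings"]:
          print(row["degree"]["value"], row["graduates"]["value"])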
  2. Binding, C.; Gnoli, C.; Tudhope, D.: Migrating a complex classification scheme to the semantic web : expressing the Integrative Levels Classification using SKOS RDF (2021) 0.03
    0.029319597 = product of:
      0.07329899 = sum of:
        0.040348392 = weight(_text_:context in 600) [ClassicSimilarity], result of:
          0.040348392 = score(doc=600,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
        0.032950602 = weight(_text_:system in 600) [ClassicSimilarity], result of:
          0.032950602 = score(doc=600,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 600, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The Integrative Levels Classification (ILC) is a comprehensive "freely faceted" knowledge organization system not previously expressed as SKOS (Simple Knowledge Organization System). This paper reports and reflects on work converting the ILC to SKOS representation. Design/methodology/approach - The design of the ILC representation and the various steps in the conversion to SKOS are described and located within the context of previous work considering the representation of complex classification schemes in SKOS. Various issues and trade-offs emerging from the conversion are discussed. The conversion implementation employed the STELETO transformation tool. Findings - The ILC conversion captures some of the ILC facet structure by a limited extension beyond the SKOS standard. SPARQL examples illustrate how this extension could be used to create faceted, compound descriptors when indexing or cataloguing. Basic query patterns are provided that might underpin search systems. Possible routes for reducing complexity are discussed. Originality/value - Complex classification schemes, such as the ILC, have features which are not straightforward to represent in SKOS and which extend beyond the functionality of the SKOS standard. The ILC's facet indicators are modelled as rdf:Property sub-hierarchies that accompany the SKOS RDF statements. The ILC's top-level fundamental facet relationships are modelled by extensions of the associative relationship - specialised sub-properties of skos:related. An approach for representing faceted compound descriptions in ILC and other faceted classification schemes is proposed.
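    As a rough, hypothetical illustration of the modelling approach described above (facet relationships as specialised sub-properties of skos:related accompanying the SKOS statements), the following rdflib sketch declares one such sub-property and links two concepts with it; all URIs and identifiers are placeholders, not the actual ILC namespace or notation:

      from rdflib import Graph, Namespace, RDF, RDFS
      from rdflib.namespace import SKOS

      ILC = Namespace("http://example.org/ilc/")   # placeholder, not the real ILC namespace
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ilc", ILC)

      # A fundamental facet relationship modelled as a specialised sub-property of skos:related
      g.add((ILC.hasAgent, RDF.type, RDF.Property))
      g.add((ILC.hasAgent, RDFS.subPropertyOf, SKOS.related))

      # Two concepts (invented identifiers) linked by that facet relationship
      g.add((ILC.conceptA, RDF.type, SKOS.Concept))
      g.add((ILC.conceptB, RDF.type, SKOS.Concept))
      g.add((ILC.conceptA, ILC.hasAgent, ILC.conceptB))

      print(g.serialize(format="turtle"))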
  3. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.03
    0.025715468 = product of:
      0.06428867 = sum of:
        0.045648996 = weight(_text_:context in 3398) [ClassicSimilarity], result of:
          0.045648996 = score(doc=3398,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.25904062 = fieldWeight in 3398, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=3398)
        0.018639674 = weight(_text_:system in 3398) [ClassicSimilarity], result of:
          0.018639674 = score(doc=3398,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 3398, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3398)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - To show how semantic web techniques can help address semantic interoperability issues in the broad cultural heritage domain, allowing users an integrated and seamless access to heterogeneous collections. Design/methodology/approach - This paper presents the heterogeneity problems to be solved. It introduces semantic web techniques that can help in solving them, focusing on the representation of controlled vocabularies and their semantic alignment. It gives pointers to some previous projects and experiments that have tried to address the problems discussed. Findings - Semantic web research provides practical technical and methodological approaches to tackle the different issues. Two contributions of interest are the simple knowledge organisation system model and automatic vocabulary alignment methods and tools. These contributions were demonstrated to be usable for enabling semantic search and navigation across collections. Research limitations/implications - The research aims at designing different representation and alignment methods for solving interoperability problems in the context of controlled subject vocabularies. Given the variety and technical richness of current research in the semantic web field, it is impossible to provide an in-depth account or an exhaustive list of references. Every aspect of the paper is, however, given one or several pointers for further reading. Originality/value - This article provides a general and practical introduction to relevant semantic web techniques. It is of specific value for the practitioners in the cultural heritage and digital library domains who are interested in applying these methods in practice.
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification" held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
  4. Krause, J.: Semantic heterogeneity : comparing new semantic web approaches with those of digital libraries (2008) 0.03
    0.025459195 = product of:
      0.063647985 = sum of:
        0.040348392 = weight(_text_:context in 1908) [ClassicSimilarity], result of:
          0.040348392 = score(doc=1908,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 1908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1908)
        0.023299592 = weight(_text_:system in 1908) [ClassicSimilarity], result of:
          0.023299592 = score(doc=1908,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 1908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1908)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (simple knowledge organization system and others) mitigate common arguments from the digital library (DL) community against participation in the Semantic web. Design/methodology/approach - The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to the semantic content during indexing. Findings - The points criticised by the semantic web and ontology approaches are the same as those of the DL "Shell model approach" from the mid-1990s, with emphasis on the centrality of its heterogeneity components (used, for example, in vascoda). The Shell model argument began with the "invisible web", necessitating the restructuring of DL approaches. The conclusion is that both approaches fit well together and that the Shell model, with its semantic heterogeneity components, can be reformulated on the semantic web basis. Practical implications - A reinterpretation of the DL approaches of semantic heterogeneity and adapting to standards and tools supported by the W3C should be the best solution. It is therefore recommended that - although most of the semantic web standards are not technologically refined for commercial applications at present - all individual DL developments should be checked for their adaptability to the W3C standards of the semantic web. Originality/value - A unique conceptual analysis of the parallel developments emanating from the digital library and semantic web communities.
    Footnote
    Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
  5. Isaac, A.; Baker, T.: Linked data practice at different levels of semantic precision : the perspective of libraries, archives and museums (2015) 0.03
    0.025459195 = product of:
      0.063647985 = sum of:
        0.040348392 = weight(_text_:context in 2026) [ClassicSimilarity], result of:
          0.040348392 = score(doc=2026,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.023299592 = weight(_text_:system in 2026) [ClassicSimilarity], result of:
          0.023299592 = score(doc=2026,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
      0.4 = coord(2/5)
    
    Abstract
    Libraries, archives and museums rely on structured schemas and vocabularies to indicate classes to which a resource may belong. In the context of linked data, key organizational components are the RDF data model, element schemas and value vocabularies, with simple ontologies having minimally defined classes and properties in order to facilitate reuse and interoperability. Simplicity over formal semantics is a tenet of the open-world assumption underlying ontology languages central to the Semantic Web, but the result is a lack of constraints, data quality checks and validation capacity. Inconsistent use of vocabularies and ontologies that do not follow formal semantics rules and logical concept hierarchies further complicate the use of Semantic Web technologies. The Simple Knowledge Organization System (SKOS) helps make existing value vocabularies available in the linked data environment, but it exchanges precision for simplicity. Incompatibilities between simple organized vocabularies, Resource Description Framework Schemas and OWL ontologies, and even basic notions of subjects and concepts, prevent smooth translations and challenge the conversion of cultural institutions' unique legacy vocabularies for linked data. Adopting the linked data vision requires accepting loose semantic interpretations. To avoid semantic inconsistencies and illogical results, cultural organizations following the linked data path must be careful to choose the level of semantics that best suits their domain and needs.
  6. Sartini, B.; Erp, M. van; Gangemi, A.: Marriage is a peach and a chalice : modelling cultural symbolism on the Semantic Web (2021) 0.01
    0.009683615 = product of:
      0.04841807 = sum of:
        0.04841807 = weight(_text_:context in 557) [ClassicSimilarity], result of:
          0.04841807 = score(doc=557,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=557)
      0.2 = coord(1/5)
    
    Abstract
    In this work, we fill the gap in the Semantic Web in the context of Cultural Symbolism. Building upon earlier work (Sartini et al. 2021), we introduce the Simulation Ontology, an ontology that models the background knowledge of symbolic meanings, developed by combining concepts taken from the authoritative theory of Simulacra and Simulations of Jean Baudrillard with symbolic structures and content taken from "Symbolism: a Comprehensive Dictionary" by Steven Olderr. We re-engineered the symbolic knowledge already present in heterogeneous resources by converting it into our ontology schema to create HyperReal, the first knowledge graph completely dedicated to cultural symbolism. A first experiment run on the knowledge graph is presented to show the potential of quantitative research on symbolism.
  7. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.01
    0.008069678 = product of:
      0.040348392 = sum of:
        0.040348392 = weight(_text_:context in 1909) [ClassicSimilarity], result of:
          0.040348392 = score(doc=1909,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 1909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1909)
      0.2 = coord(1/5)
    
    Footnote
    Contribution to a special issue "Digital libraries and the semantic web: context, applications and research".
  8. Svensson, L.G.: Unified access : a semantic Web based model for multilingual navigation in heterogeneous data sources (2008) 0.01
    0.007908144 = product of:
      0.03954072 = sum of:
        0.03954072 = weight(_text_:system in 2191) [ClassicSimilarity], result of:
          0.03954072 = score(doc=2191,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 2191, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2191)
      0.2 = coord(1/5)
    
    Abstract
    Most online library catalogues are not well equipped for subject search. On the one hand it is difficult to navigate the structures of the thesauri and classification systems used for indexing. Further, there is little or no support for the integration of crosswalks between different controlled vocabularies, so that a subject search query formulated using one controlled vocabulary will not find resources indexed with another knowledge organisation system even if there exists a crosswalk between them. In this paper we will look at Semantic Web technologies and a prototype system leveraging those technologies in order to enhance the subject search possibilities in heterogeneously indexed repositories. Finally, we will have a brief look at different initiatives aimed at integrating library data into the Semantic Web.
  9. Veltman, K.H.: Syntactic and semantic interoperability : new approaches to knowledge and the Semantic Web (2001) 0.01
    0.0064557428 = product of:
      0.032278713 = sum of:
        0.032278713 = weight(_text_:context in 3883) [ClassicSimilarity], result of:
          0.032278713 = score(doc=3883,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 3883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=3883)
      0.2 = coord(1/5)
    
    Abstract
    At WWW-7 (Brisbane, 1997), Tim Berners-Lee outlined his vision of a global reasoning web. At WWW-8 (Toronto, May 1998), he developed this into a vision of a semantic web, where one could search not just for isolated words, but for meaning in the form of logically provable claims. In the past four years this vision has spread with amazing speed. The semantic web has been adopted by the European Commission as one of the important goals of the Sixth Framework Programme. In the United States it has become linked with the Defense Advanced Research Projects Agency (DARPA). While this quest to achieve a semantic web is new, the quest for meaning in language has a history that is almost as old as language itself. Accordingly this paper opens with a survey of the historical background. The contributions of the Dublin Core are reviewed briefly. To achieve a semantic web requires both syntactic and semantic interoperability. These challenges are outlined. A basic contention of this paper is that semantic interoperability requires much more than a simple agreement concerning the static meaning of a term. Different levels of agreement (local, regional, national and international) are involved and these levels have their own history. Hence, one of the larger challenges is to create new systems of knowledge organization, which identify and connect these different levels. With respect to meaning or semantics, early twentieth century pioneers such as Wüster were hopeful that it might be sufficient to limit oneself to isolated terms and words without reference to the larger grammatical context: to concept systems rather than to propositional logic. While a fascination with concept systems implicitly dominates many contemporary discussions, this paper suggests why this approach is not sufficient. The final section of this paper explores how an approach using propositional logic could lead to a new approach to universals and particulars. This points to a re-organization of knowledge, and opens the way for a vision of a semantic web with all the historical and cultural richness and complexity of language itself.
  10. Miller, E.; Schloss. B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.01
    0.005648775 = product of:
      0.028243875 = sum of:
        0.028243875 = weight(_text_:context in 5903) [ClassicSimilarity], result of:
          0.028243875 = score(doc=5903,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.16027321 = fieldWeight in 5903, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5903)
      0.2 = coord(1/5)
    
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
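    As a concrete, minimal illustration of the model/syntax separation described above, the following rdflib sketch builds a single RDF statement and serializes the same model once as RDF/XML and once as Turtle; the resource URI and literal value are invented:

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      g = Graph()
      page = URIRef("http://example.org/somepage")        # hypothetical web resource
      g.add((page, DC.creator, Literal("Jane Doe")))       # one metadata statement about it

      print(g.serialize(format="xml"))      # the RDF/XML serialization syntax described above
      print(g.serialize(format="turtle"))   # an alternate serialization of the same model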
  11. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.01
    0.005648775 = product of:
      0.028243875 = sum of:
        0.028243875 = weight(_text_:context in 553) [ClassicSimilarity], result of:
          0.028243875 = score(doc=553,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.16027321 = fieldWeight in 553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.02734375 = fieldNorm(doc=553)
      0.2 = coord(1/5)
    
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions that contain heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus gain access to other collections using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of aligning manually all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced a lot of these alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. In our presentation, we will discuss a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using either the description vocabulary of the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for using unified representations of the vocabularies' semantic and lexical information. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing more generic applications, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
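    The abstract's core idea (a query using concept C from vocabulary V can also retrieve objects indexed with an equivalent concept D from vocabulary W) can be sketched in a few lines of Python; the vocabulary prefixes, concept identifiers and mappings below are invented for illustration and are not STITCH data:

      # Invented equivalence mappings between two vocabularies (Mandragore, Iconclass)
      mappings = {"mandragore:lion": {"iconclass:25F23"}}

      # Toy indexes: objects from each collection, described with their own vocabulary
      index = {
          "mandragore:lion": {"manuscript-001"},
          "iconclass:25F23": {"manuscript-042"},
      }

      def search(concept):
          """Return objects indexed with the concept or with any mapped equivalent."""
          expanded = {concept} | mappings.get(concept, set())
          results = set()
          for c in expanded:
              results |= index.get(c, set())
          return results

      print(search("mandragore:lion"))  # finds objects from both collections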
  12. Panzer, M.: Relationships, spaces, and the two faces of Dewey (2008) 0.00
    0.003954072 = product of:
      0.01977036 = sum of:
        0.01977036 = weight(_text_:system in 2127) [ClassicSimilarity], result of:
          0.01977036 = score(doc=2127,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.14763528 = fieldWeight in 2127, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2127)
      0.2 = coord(1/5)
    
    Content
    "When dealing with a large-scale and widely-used knowledge organization system like the Dewey Decimal Classification, we often tend to focus solely on the organization aspect, which is closely intertwined with editorial work. This is perfectly understandable, since developing and updating the DDC, keeping up with current scientific developments, spotting new trends in both scholarly communication and popular publishing, and figuring out how to fit those patterns into the structure of the scheme are as intriguing as they are challenging. From the organization perspective, the intended user of the scheme is mainly the classifier. Dewey acts very much as a number-building engine, providing richly documented concepts to help with classification decisions. Since the Middle Ages, quasi-religious battles have been fought over the "valid" arrangement of places according to specific views of the world, as parodied by Jorge Luis Borges and others. Organizing knowledge has always been primarily an ontological activity; it is about putting the world into the classification. However, there is another side to this coin--the discovery side. While the hierarchical organization of the DDC establishes a default set of places and neighborhoods that is also visible in the physical manifestation of library shelves, this is just one set of relationships in the DDC. A KOS (Knowledge Organization System) becomes powerful by expressing those other relationships in a manner that not only collocates items in a physical place but in a knowledge space, and exposes those other relationships in ways beneficial and congenial to the unique perspective of an information seeker.
  13. Semantic search over the Web (2012) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 411) [ClassicSimilarity], result of:
          0.018639674 = score(doc=411,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 411, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=411)
      0.2 = coord(1/5)
    
    Content
    Contents: Introduction.- Part I Introduction to Web of Data.- Topology of the Web of Data.- Storing and Indexing Massive RDF Data Sets.- Designing Exploratory Search Applications upon Web Data Sources.- Part II Search over the Web.- Path-oriented Keyword Search query over RDF.- Interactive Query Construction for Keyword Search on the Semantic Web.- Understanding the Semantics of Keyword Queries on Relational Data Without Accessing the Instance.- Keyword-Based Search over Semantic Data.- Semantic Link Discovery over Relational Data.- Embracing Uncertainty in Entity Linking.- The Return of the Entity-Relationship Model: Ontological Query Answering.- Linked Data Services and Semantics-enabled Mashup.- Part III Linked Data Search engines.- A Recommender System for Linked Data.- Flint: from Web Pages to Probabilistic Semantic Data.- Searching and Browsing Linked Data with SWSE.
  14. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.00
    0.0031002287 = product of:
      0.015501143 = sum of:
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 3926) [ClassicSimilarity], result of:
              0.04650343 = score(doc=3926,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 3926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3926)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Pages
    pp. 29-63
  15. Siwecka, D.: Knowledge organization systems used in European national libraries towards interoperability of the semantic Web (2018) 0.00
    0.0031002287 = product of:
      0.015501143 = sum of:
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 4815) [ClassicSimilarity], result of:
              0.04650343 = score(doc=4815,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 4815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4815)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    18. 1.2019 18:46:29
  16. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.00
    0.0026882975 = product of:
      0.013441487 = sum of:
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.04032446 = score(doc=759,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    11. 5.2013 19:22:18
  17. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.00
    0.0026882975 = product of:
      0.013441487 = sum of:
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.04032446 = score(doc=3283,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  18. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0023299593 = product of:
      0.011649796 = sum of:
        0.011649796 = weight(_text_:system in 4232) [ClassicSimilarity], result of:
          0.011649796 = score(doc=4232,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.08699492 = fieldWeight in 4232, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.01953125 = fieldNorm(doc=4232)
      0.2 = coord(1/5)
    
    Abstract
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are basic components of such algorithms and ultimately determine which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths: the coherence of consecutive connections is maximized to avoid trivial and overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference when it comes to heuristically optimized minimal-cost paths. The effectiveness of the paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated in different ways. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. The application to this use case is a practical example because the different aspects of exploratory search come together. In fact, the techniques also evolved from the experience of implementing the use case. Practical details about the semantic model are explained and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype of a tool to explore scientific publications, researchers and conferences, against some important alternatives.
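    Since the abstract singles out A* for heuristically optimized minimal-cost paths, a compact, generic Python sketch of the algorithm follows; the toy graph, weights and zero heuristic are illustrative assumptions and do not reproduce the thesis's actual model:

      import heapq

      def a_star(graph, start, goal, heuristic):
          # graph maps node -> list of (neighbour, edge_cost); heuristic(n) must not
          # overestimate the remaining cost to the goal (admissibility).
          frontier = [(heuristic(start), 0.0, start, [start])]
          best = {start: 0.0}
          while frontier:
              _, cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return path, cost
              for neighbour, edge_cost in graph.get(node, []):
                  new_cost = cost + edge_cost
                  if new_cost < best.get(neighbour, float("inf")):
                      best[neighbour] = new_cost
                      heapq.heappush(frontier, (new_cost + heuristic(neighbour),
                                                new_cost, neighbour, path + [neighbour]))
          return None, float("inf")

      # Toy linked-data-like graph: resources connected by weighted relations
      graph = {
          "paper:A": [("author:X", 1.0), ("conf:WWW", 2.0)],
          "author:X": [("paper:B", 1.0)],
          "conf:WWW": [("paper:B", 0.5)],
      }
      print(a_star(graph, "paper:A", "paper:B", heuristic=lambda n: 0.0))
      # (['paper:A', 'author:X', 'paper:B'], 2.0)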
  19. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.00
    0.001937643 = product of:
      0.009688215 = sum of:
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 2192) [ClassicSimilarity], result of:
              0.029064644 = score(doc=2192,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 2192, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)