Search (158 results, page 8 of 8)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Carbonaro, A.; Santandrea, L.: ¬A general Semantic Web approach for data analysis on graduates statistics 0.00
    Abstract
    Currently, several datasets released in Linked Open Data format are available at national and international level, but the lack of shared strategies for defining concepts in the statistical publishing community makes it difficult to compare facts drawn from different data sources. In order to guarantee a shared representation framework for the dissemination of statistical concepts about graduates, we developed SW4AL, an ontology-based system for the graduate surveys domain. The developed system transforms low-level data into an enriched information model and is based on the AlmaLaurea surveys, which cover more than 90% of Italian graduates. SW4AL: i) semantically describes the different peculiarities of the graduates; ii) promotes the structured definition of the AlmaLaurea data and their subsequent publication in the Linked Open Data context; iii) provides for their reuse in the open data scope; iv) enables logical reasoning about knowledge representation. SW4AL establishes a common semantics for the graduate surveys domain by proposing a SPARQL endpoint and a Web-based interface for querying and visualizing the structured data.
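    As an illustrative sketch of the kind of access SW4AL proposes, the following Python snippet queries a SPARQL endpoint; the endpoint URL and all vocabulary terms are invented here, since the abstract does not reproduce the actual SW4AL ontology.

      # Minimal sketch: querying a (hypothetical) SW4AL-style SPARQL endpoint.
      # Endpoint URL, prefix, and property names are illustrative assumptions,
      # not the published SW4AL vocabulary.
      from SPARQLWrapper import SPARQLWrapper, JSON

      endpoint = SPARQLWrapper("http://example.org/sw4al/sparql")  # hypothetical
      endpoint.setQuery("""
          PREFIX sw4al: <http://example.org/sw4al/ontology#>
          SELECT ?degree (AVG(?salary) AS ?avgSalary)
          WHERE {
              ?graduate a sw4al:Graduate ;
                        sw4al:degreeProgramme ?degree ;
                        sw4al:netMonthlySalary ?salary .
          }
          GROUP BY ?degree
      """)
      endpoint.setReturnFormat(JSON)
      results = endpoint.query().convert()
      for row in results["results"]["bindings"]:
          print(row["degree"]["value"], row["avgSalary"]["value"])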
  2. Wenige, L.; Ruhland, J.: Similarity-based knowledge graph queries for recommendation retrieval (2019) 0.00
    Abstract
    Current retrieval and recommendation approaches rely on hard-wired data models. This hinders personalized customizations to meet information needs of users in a more flexible manner. Therefore, the paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed novel content-based recommendation approaches. They rely on concept annotations of Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available knowledge graphs. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems (LDRS) as well as for semantic search engines that can consume LOD resources.
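    A toy sketch of the retrieval idea, in Python with rdflib: a seed concept is expanded over SKOS links, and resources annotated with any concept in the expanded set are returned. All URIs are invented; the paper's actual query language and ranking are considerably richer.

      # Toy sketch of SKOS-based recommendation retrieval: expand a seed concept
      # through skos:related / skos:broader links, then collect resources
      # annotated with any concept in the expanded set. URIs are invented.
      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS, DCTERMS

      EX = Namespace("http://example.org/")
      g = Graph()
      g.add((EX.jazz, SKOS.related, EX.blues))
      g.add((EX.jazz, SKOS.broader, EX.music))
      g.add((EX.album1, DCTERMS.subject, EX.jazz))
      g.add((EX.album2, DCTERMS.subject, EX.blues))
      g.add((EX.album3, DCTERMS.subject, EX.music))

      def recommend(seed):
          # one-step expansion over mapping/hierarchy links
          concepts = {seed}
          concepts |= set(g.objects(seed, SKOS.related))
          concepts |= set(g.objects(seed, SKOS.broader))
          return {res for c in concepts for res in g.subjects(DCTERMS.subject, c)}

      print(recommend(EX.jazz))  # album1, album2 and album3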
  3. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.00
    Abstract
    Using knowledge organization instruments to describe information resources can substantially increase the efficiency and effectiveness of thematic searches: standardized terms support the recall and precision of term-based searches, and explicit relations provide the basis for explorative search processes. Retrieval functionality can be increased further by differentiating and specifying the semantic information contained in authority data, going beyond the basic relation types common in thesauri and classifications (equivalence, hierarchy, association). In modern information spaces, however, where data from different institutions are made accessible through a single platform independent of time and place, individual knowledge systems can support information retrieval only inadequately: the indexing data relevant to thematic queries are too heterogeneous. An improvement can be achieved by establishing interoperability between the different documentation languages. The talk sets out how the semantic information contained in knowledge systems can be optimized to support thematic searches, and how interoperability can be created between systems so as to guarantee equivalent functionality in heterogeneous information spaces. In this context, current mapping projects are also discussed, such as the DFG project CrissCross and the RESEDA project, which investigates possibilities for the semantic enrichment of existing documentation languages.
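    A minimal sketch of the differentiation idea, with invented relation names and terms: typed relations let an explorative search expand a query only along the relation that matches the information need, rather than along every associative link.

      # Sketch: differentiated (typed) semantic relations vs. a generic
      # "related" link. Relation names and terms are invented for illustration.
      relations = {
          ("Aspirin", "treats"):      ["Headache", "Fever"],
          ("Aspirin", "ingredient"):  ["Acetylsalicylic acid"],
          ("Aspirin", "relatedTerm"): ["Ibuprofen"],  # classic associative link
      }

      def expand(term, relation_type):
          """Expand a search term only along one differentiated relation."""
          return relations.get((term, relation_type), [])

      # A query about therapy expands along "treats" only, keeping precision:
      print(expand("Aspirin", "treats"))       # ['Headache', 'Fever']
      print(expand("Aspirin", "relatedTerm"))  # ['Ibuprofen']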
  4. Lee, S.: Pidgin metadata framework as a mediator for metadata interoperability (2021) 0.00
    Abstract
    A pidgin metadata framework based on the concept of pidgin metadata is proposed to address the limitations of existing approaches to metadata interoperability and to achieve more reliable interoperability. The framework consists of three layers with a hierarchical structure and reflects the semantic and structural characteristics of various metadata. Layer 1 performs both an external function, serving as an anchor for semantic association between metadata elements, and an internal function, providing semantic categories that can encompass detailed elements. Layer 2 is an arbitrary layer composed of substantial elements from existing metadata; it associates different metadata elements that describe the same or similar aspects of information resources with the semantic categories of Layer 1. Layer 3 implements the semantic relationships between Layer 1 and Layer 2 through the Resource Description Framework syntax. With this structure, the pidgin metadata framework can establish criteria for semantic connection between different elements and fully reflect the complexity and heterogeneity of various metadata. Additionally, it is expected to provide a bibliographic environment that can achieve more reliable metadata interoperability than existing approaches by securing the communication between metadata.
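    A rough sketch of the layering in RDF terms, using rdflib; the "pidgin" Layer-1 category URIs and the local scheme are invented, not taken from the paper.

      # Rough sketch of the pidgin-framework idea: a Layer-1 semantic category
      # acts as an anchor, Layer-2 elements from existing schemes attach to it,
      # and the links (Layer 3) are plain RDF triples. "pidgin:" and "local:"
      # URIs are invented for illustration.
      from rdflib import Graph, Namespace
      from rdflib.namespace import DC, RDFS

      PIDGIN = Namespace("http://example.org/pidgin#")    # hypothetical Layer 1
      LOCAL = Namespace("http://example.org/localmeta#")  # hypothetical scheme

      g = Graph()
      # Layer 3: both elements describe the same aspect ("agent responsible")
      g.add((DC.creator, RDFS.subPropertyOf, PIDGIN.agentResponsible))
      g.add((LOCAL.author, RDFS.subPropertyOf, PIDGIN.agentResponsible))
      print(g.serialize(format="turtle"))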
  5. Ahmed, M.; Mukhopadhyay, M.; Mukhopadhyay, P.: Automated knowledge organization : AI ML based subject indexing system for libraries (2023) 0.00
    Source
    DESIDOC journal of library and information technology. 43(2023) no.1, S.45-54
  6. Miller, E.; Schloss, B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.00
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
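    A minimal illustration of the model/syntax split described above, using the Python rdflib library: the same triple-based model is serialized once in the RDF/XML syntax this document defines and once in an alternative syntax. The resource URI is an example.

      # Minimal illustration of RDF's model/syntax split: build a description
      # as triples (the model), then serialize it as RDF/XML (the syntax this
      # document specifies). The resource URI is an example.
      from rdflib import Graph, URIRef, Literal
      from rdflib.namespace import DC

      g = Graph()
      page = URIRef("http://example.org/index.html")
      g.add((page, DC.creator, Literal("Ora Lassila")))
      g.add((page, DC.title, Literal("Example home page")))

      print(g.serialize(format="xml"))     # the RDF/XML serialization syntax
      print(g.serialize(format="turtle"))  # an alternate serialization of the same model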
  7. Isaac, A.: Aligning thesauri for an integrated access to Cultural Heritage Resources (2007) 0.00
    Abstract
    Currently, a number of efforts are being carried out to integrate collections from different institutions containing heterogeneous material. Examples of such projects are The European Library [1] and the Memory of the Netherlands [2]. A crucial point for their success is the ability to provide unified access on top of the different collections, e.g. using one single vocabulary for querying or browsing the objects they contain. This is made difficult by the fact that the objects from different collections are often described using different vocabularies - thesauri, classification schemes - and are therefore not interoperable at the semantic level. To solve this problem, one can turn to semantic links - mappings - between the elements of the different vocabularies. If one knows that a concept C from a vocabulary V is semantically equivalent to a concept D from vocabulary W, then an appropriate search engine can return all the objects that were indexed against D for a query for objects described using C. We thus have access to other collections using a single vocabulary. This is however an ideal situation, and hard alignment work is required to reach it. Several projects in the past have tried to implement such a solution, like MACS [3] and Renardus [4]. They have demonstrated very interesting results, but also highlighted the difficulty of manually aligning all the different vocabularies involved in practical cases, which sometimes contain hundreds of thousands of concepts. To alleviate this problem, a number of tools have been proposed that provide candidate mappings between two input vocabularies, making alignment a (semi-)automatic task. Recently, the Semantic Web community has produced many such alignment tools. Several techniques are found, depending on the material they exploit: labels of concepts, structure of vocabularies, collection objects and external knowledge sources. Throughout our presentation, we will present a concrete heterogeneity case where alignment techniques have been applied to build a (pilot) browser, developed in the context of the STITCH project [5]. This browser enables unified access to two collections of illuminated manuscripts, using the description vocabulary used in the first collection, Mandragore [6], or the one used by the second, Iconclass [7]. In our talk, we will also make the case for unified representations of the semantic and lexical information of vocabularies. In addition to easing the use of the alignment tools that take these vocabularies as input, turning to a standard representation format helps in designing applications that are more generic, like the browser we demonstrate. We give pointers to SKOS [8], an open and web-enabled format currently developed by the Semantic Web community.
    Content
    Presentation given at the 'UDC Seminar: Information Access for the Global Community, The Hague, 4-5 June 2007'
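    The equivalence idea from the abstract (a query for concept C also returning objects indexed against the mapped concept D) can be sketched in a few lines of Python with rdflib; the vocabulary and object URIs below are invented.

      # Sketch of the alignment idea: if concept C (vocabulary V) is mapped as
      # equivalent to concept D (vocabulary W), a query for C can also return
      # objects indexed against D. URIs invented; the real project aligned
      # Mandragore and Iconclass.
      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS, DCTERMS

      V = Namespace("http://example.org/vocabV#")
      W = Namespace("http://example.org/vocabW#")
      EX = Namespace("http://example.org/objects/")

      g = Graph()
      g.add((V.horse, SKOS.exactMatch, W.equus))  # the alignment
      g.add((EX.ms1, DCTERMS.subject, V.horse))   # indexed with vocabulary V
      g.add((EX.ms2, DCTERMS.subject, W.equus))   # indexed with vocabulary W

      query_concepts = {V.horse} | set(g.objects(V.horse, SKOS.exactMatch))
      hits = {o for c in query_concepts for o in g.subjects(DCTERMS.subject, c)}
      print(hits)  # both ms1 and ms2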
  8. Veltman, K.H.: Syntactic and semantic interoperability : new approaches to knowledge and the Semantic Web (2001) 0.00
    Source
    New review of information networking. 7(2001) no.xx, S.xx-xx
  9. Heckner, M.; Mühlbacher, S.; Wolff, C.: Tagging tagging : a classification model for user keywords in scientific bibliography management systems (2007) 0.00
    Abstract
    Therefore our main research questions are as follows:
    • Is it possible to discover regular patterns in tag usage and to establish a stable category model?
    • Does a specific tagging language comparable to internet slang or chatspeak evolve?
    • How do social tags differ from traditional (author / expert) keywords?
    • To what degree are social tags taken from or findable in the full text of the tagged resource?
    • Do tags in a research literature context go beyond simple content description (e.g. tags indicating time or task-related information, cf. Kipp et al. 2006)?
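    The fourth question above reduces to a simple measurement once full texts are available; a naive sketch in Python, with invented example data (substring matching only, no stemming):

      # Naive sketch for one research question above: what fraction of a
      # resource's social tags literally occur in its full text? Data invented.
      def tag_fulltext_overlap(tags, fulltext):
          text = fulltext.lower()
          found = [t for t in tags if t.lower() in text]
          return len(found) / len(tags) if tags else 0.0

      tags = ["folksonomy", "tagging", "toread"]
      fulltext = "We study tagging behaviour in folksonomy-based systems ..."
      print(tag_fulltext_overlap(tags, fulltext))  # ~0.67: 'toread' is not in the text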
  10. Tudhope, D.; Binding, C.: Toward terminology services : experiences with a pilot Web service thesaurus browser (2006) 0.00
    Source
    Bulletin of the American Society for Information Science and Technology. 33(2006) no.5, S.xx-xx
  11. Angjeli, A.; Isaac, A.: Semantic web and vocabularies interoperability : an experiment with illuminations collections (2008) 0.00
    Content
    Contribution presented at: World library and information congress: 74th IFLA general conference and council, 10-14 August 2008, Québec, Canada.
  12. Isaac, A.; Schlobach, S.; Matthezing, H.; Zinn, C.: Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies (2008) 0.00
    Content
    This paper is based on a talk given at "Information Access for the Global Community, An International Seminar on the Universal Decimal Classification" held on 4-5 June 2007 in The Hague, The Netherlands. An abstract of this talk will be published in Extensions and Corrections to the UDC, an annual publication of the UDC consortium. Contribution to a special issue on "Digital libraries and the semantic web: context, applications and research".
  13. Semantic search over the Web (2012) 0.00
    Abstract
    The Web has become the world's largest database, with search being the main tool that allows organizations and individuals to exploit its huge amount of information. Search on the Web has traditionally been based on textual and structural similarities, ignoring to a large degree the semantic dimension, i.e., understanding the meaning of the query and of the document content. Combining search and semantics gives birth to the idea of semantic search. Traditional search engines have already advertised some semantic dimensions. Some of them, for instance, can enhance their generated result sets with documents that are semantically related to the query terms even though they may not include these terms. Nevertheless, the exploitation of semantic search has not yet reached its full potential. In this book, Roberto De Virgilio, Francesco Guerra and Yannis Velegrakis present an extensive overview of the work done in semantic search and other related areas. They explore different technologies and solutions in depth, making their collection valuable and stimulating reading for both academic and industrial researchers. The book is divided into three parts. The first introduces the readers to the basic notions of the Web of Data. It describes the different kinds of data that exist, their topology, and their storing and indexing techniques. The second part is dedicated to Web search. It presents different types of search, like exploratory or path-oriented search, alongside methods for their efficient and effective implementation. Other related topics included in this part are the use of uncertainty in query answering, the exploitation of ontologies, and the use of semantics in mashup design and operation. The focus of the third part is on linked data, and more specifically, on applying ideas originating in recommender systems to linked data management, and on techniques for efficient query answering over linked data.
  14. Altenhöner, R.; Hengel, C.; Jahns, Y.; Junger, U.; Mahnke, C.; Oehlschläger, S.; Werner, C.: Weltkongress Bibliothek und Information, 74. IFLA-Generalkonferenz in Quebec, Kanada : Aus den Veranstaltungen der Division IV Bibliographic Control, der Core Activities ICADS und UNIMARC sowie der Information Technology Section (2008) 0.00
  15. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.00
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple, but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
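    A condensed sketch of the central step described above, the transformation of a mapping specification into an executable SPARQL query: a CONSTRUCT query rewrites instances of two source properties into a target property, applying a simple instance transformation function. The scheme URIs are invented; the thesis's actual mapping model is far more general.

      # Condensed sketch of "mapping specification -> executable SPARQL": a
      # CONSTRUCT query reconciles a source scheme's properties into the target
      # scheme's, applying an instance transformation (name concatenation).
      from rdflib import Graph, Namespace, Literal

      SRC = Namespace("http://example.org/src#")  # invented source scheme
      g = Graph()
      g.add((SRC.rec1, SRC.firstName, Literal("Ada")))
      g.add((SRC.rec1, SRC.lastName, Literal("Lovelace")))

      mapped = g.query("""
          PREFIX src: <http://example.org/src#>
          PREFIX tgt: <http://example.org/tgt#>
          CONSTRUCT { ?r tgt:fullName ?full }
          WHERE {
              ?r src:firstName ?fn ; src:lastName ?ln .
              BIND(CONCAT(?fn, " ", ?ln) AS ?full)
          }
      """)
      for triple in mapped:
          print(triple)  # (src:rec1, tgt:fullName, "Ada Lovelace")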
  16. Hubrich, J.: Concepts in Context - Cologne Conference on Interoperability and Semantics in Knowledge Organization : Internationale Fachtagung und Abschlussworkshop des DFGProjekts CrissCross in Köln (2010) 0.00
    Content
    The second day began with a keynote by Dagobert Soergel of the University at Buffalo on Conceptual Foundations for Semantic Mapping and Semantic Search. At its centre was the idea of a hub, a semantic linking structure in the form of a core classification that contains elementary concepts and semantic relations, and through which mappings between different knowledge organization systems are to be carried out. The method was illustrated with numerous examples. The first session of the second day was devoted to interoperability and standardization. Stella Dextre Clarke from the United Kingdom reported, starting from the relations between concepts of different documentation languages created in major mapping projects, on challenges and open questions in the development of the new ISO standard 25964-2, which is to serve as a guideline for establishing interoperability between thesauri and other vocabularies. In the following talk, Philipp Mayr of the GESIS Leibniz Institute for the Social Sciences presented KoMoHe (Kompetenzzentrum Modellbildung und Heterogenitätsbehandlung), an already completed project whose added value for retrieval in heterogeneously indexed information spaces was confirmed by means of an information retrieval test. Imprecise result sets, however, motivated the follow-up project IRM (Value-Added Services for Information Retrieval), which investigates possibilities of search expansion and re-ranking.
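    Soergel's hub idea can be sketched in a few lines: each vocabulary is mapped once to the hub classification, and vocabulary-to-vocabulary mappings are derived by composing the hub links, so n vocabularies need n mappings instead of n·(n-1) pairwise ones. The data below are invented.

      # Sketch of hub-based mapping: map each vocabulary to one hub
      # classification and derive vocabulary-to-vocabulary links by
      # composition. All data invented for illustration.
      to_hub = {
          ("VocabA", "Pferd"):  "hub:Horse",
          ("VocabB", "cheval"): "hub:Horse",
          ("VocabB", "chien"):  "hub:Dog",
      }

      def derive_mapping(term_a, vocab_a, vocab_b):
          """Find vocab_b terms sharing a hub concept with (vocab_a, term_a)."""
          hub = to_hub.get((vocab_a, term_a))
          return [t for (v, t), h in to_hub.items() if v == vocab_b and h == hub]

      print(derive_mapping("Pferd", "VocabA", "VocabB"))  # ['cheval']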
  17. ISO 25964-2: Der Standard für die Interoperabilität von Thesauri (2013) 0.00
    Content
    The full title of Part 2 is "Information and documentation - Thesauri and interoperability with other vocabularies - Part 2: Interoperability with other vocabularies". Important topics covered by the standard are structural models for mapping, guidelines for mapping types, and the handling of pre-combination, which occurs particularly in classifications, taxonomies and subject heading systems. The primary focus of ISO 25964 is on thesauri, and with the exception of terminologies, no comparable standards exist for the other vocabulary types. Rather than attempting to specify these normatively, Part 2 deals exclusively with the interoperability between them and thesauri. The chapters for the individual vocabulary types each cover the following matters:
    • key properties of the vocabulary (descriptive, not normative)
    • semantic components/relationships (descriptive, not normative)
    • where applicable, recommendations for mapping between the vocabulary and a thesaurus (normative)
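    The typed mappings that Part 2 standardizes are commonly expressed with the SKOS mapping properties; a brief sketch in Python with rdflib, using invented concepts (the equation of ISO 25964-2 mapping types with SKOS properties is a customary reading, not a quotation from the standard):

      # Brief sketch of typed inter-vocabulary mappings of the kind ISO 25964-2
      # standardizes, expressed with SKOS mapping properties. Concepts invented.
      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS

      THES = Namespace("http://example.org/thesaurus#")
      CLS = Namespace("http://example.org/classification#")

      g = Graph()
      g.add((THES.automobiles, SKOS.exactMatch, CLS.motorCars))  # equivalence
      g.add((THES.hatchbacks, SKOS.broadMatch, CLS.motorCars))   # hierarchical: target broader
      g.add((THES.vehicles, SKOS.narrowMatch, CLS.motorCars))    # hierarchical: target narrower
      print(g.serialize(format="turtle"))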
  18. Slavic, A.: Mapping intricacies : UDC to DDC (2010) 0.00
    Content
    Another challenge appears when, e.g., mapping Dewey class 890 Literatures of other specific languages and language families, which does not make sense in UDC, in which all languages and literatures have equal status. Standard UDC schedules do not distinguish between preferred literatures and other literatures. In principle, UDC does not allow classes entitled 'others' that have no defined semantic content. If entities are subdivided and there is no provision for an item outside the listed subclasses, then this item is subsumed under a top class or a broader class where all unspecified or general members of that class may be expected. If specification is needed, this can be devised by adding an alphabetical extension to the broader class. Here we have to find and list in the UDC Summary all literatures that are 'unpreferred', i.e. lumped in the 890 classes, and map them again as a many-to-one specific-to-broader match. The example below illustrates another interesting case. Classes Dewey 061 and UDC 06 cover roughly the same semantic field, but in the subdivision the Dewey Summaries list a combination of subject and place and, as an enumerative classification, provide ready-made numbers for the combinations of place that are most common in an average (American?) library. This is a frequent approach in schemes created with the physical book arrangement, i.e. library shelves, in mind. UDC, designed as an indexing language for information retrieval, keeps subject and place in separate tables and allows any concept of place, such as, e.g., (7) North America, to be used in combination with any subject, as these may coincide in documents. Thus combinations such as Newspapers in North America, or Organizations in North America, would not be offered as ready-made combinations. There is no selection of 'preferred' or 'most needed' countries, languages or cultures in the standard UDC edition: [table not reproduced]
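    The Dewey 061 / UDC 06 case above can be sketched as a tiny lookup: an enumerated DDC subject-plus-place class corresponds to a UDC combination synthesized from separate subject and place tables. The sample numbers are illustrative, not verified against the published schedules.

      # Sketch of the heterogeneity discussed above: DDC enumerates ready-made
      # subject+place classes, while UDC composes subject and place at indexing
      # time, so the mapping is one enumerated class -> one synthesized
      # combination. Numbers are illustrative only.
      def udc_equivalent(ddc_class):
          enumerated = {
              "061": ("06", "(7)"),  # e.g. organizations in North America (illustrative)
          }
          if ddc_class in enumerated:
              subject, place = enumerated[ddc_class]
              return f"{subject}{place}"  # UDC synthesizes: 06(7)
          return None

      print(udc_equivalent("061"))  # '06(7)'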
