Search (67 results, page 2 of 4)

  • × theme_ss:"Semantische Interoperabilität"
  • × year_i:[2000 TO 2010}
  1. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.02
    0.022917118 = product of:
      0.045834236 = sum of:
        0.029088326 = weight(_text_:web in 2627) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2627,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2627)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
              0.03349182 = score(doc=2627,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2627)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
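    The figures above are Lucene's ClassicSimilarity "explain" output for this hit. As a rough sketch of how one such clause is computed, the snippet below recomputes the "web" term weight from the statistics shown in the tree (tf, idf, queryNorm, fieldNorm follow the classic tf-idf scoring scheme; this is an illustration, not the full Lucene implementation, which also applies the coord factors shown above):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute a single ClassicSimilarity term clause (sketch)."""
    tf = math.sqrt(freq)                             # term frequency factor
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # inverse document frequency
    query_weight = idf * query_norm                  # query-side normalisation
    field_weight = tf * idf * field_norm             # document-side weight
    return query_weight * field_weight

# Statistics taken from the "web" clause of result 1 above
score = classic_similarity(freq=2.0, doc_freq=4597, max_docs=44218,
                           query_norm=0.049439456, field_norm=0.0390625)
print(round(score, 9))
```

The result agrees with the 0.029088326 reported in the tree to within rounding.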
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus considerable effort is being invested in automating the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We address the basic question: "What are the pros and cons of human and automatic mapping, and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases, and by grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and arrive at some basic recommendations on which approach to use when.
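    The comparison described above can be framed in code: treating the human mapping as a gold standard and scoring the automatic mapping against it in terms of precision and recall. The term pairs below are invented for illustration and are not taken from the paper:

```python
# Hypothetical thesaurus mappings; each pair is (source_term, target_term).
human_mapping = {("maize", "corn"), ("cattle", "livestock"), ("rice", "rice")}
auto_mapping = {("maize", "corn"), ("cattle", "cows"), ("rice", "rice"),
                ("soil", "dirt")}

agreed = human_mapping & auto_mapping
precision = len(agreed) / len(auto_mapping)  # share of automatic pairs that agree
recall = len(agreed) / len(human_mapping)    # share of human pairs that were found

print(f"precision={precision:.2f} recall={recall:.2f}")
```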
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.02
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential for integrating the different terminology resources both technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
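    The spine idea in the abstract, mapping each vocabulary onto DDC classes so that users can cross-browse between them, can be sketched with plain dictionaries. The class numbers and captions below are illustrative stand-ins, not mappings from the paper:

```python
# Each vocabulary maps its terms onto a DDC class that acts as the spine.
ukat_to_ddc = {"Computer programming": "005", "Data processing": "004"}
acm_ccs_to_ddc = {"D.1 Programming Techniques": "005",
                  "H.2 Database Management": "005.74"}

def cross_browse(ddc_class):
    """Collect the terms from every vocabulary mapped to the same spine class."""
    hits = {}
    for name, vocab in [("UKAT", ukat_to_ddc), ("ACM CCS", acm_ccs_to_ddc)]:
        hits[name] = [term for term, ddc in vocab.items() if ddc == ddc_class]
    return hits

print(cross_browse("005"))
```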
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For the published proceedings, see the special issue of Aslib Proceedings.
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.02
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  4. Burstein, M.; McDermott, D.V.: Ontology translation for interoperability among Semantic Web services (2005) 0.02
    
    Abstract
    Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on the use of shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will depend on ontology mappings and on architectures supporting the associated translation processes. The question we ask is: does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of the semantic web service modeling ontology OWL-S and argues that, as a practical matter, the translation function cannot always be isolated in mediators. Ontology mappings need to be published on the semantic web just as ontologies themselves are. The translation for service discovery, service process model interpretation, task negotiation, service invocation, and response interpretation may then be distributed to various places in the architecture so that translation can be done in the specific goal-oriented informational contexts of the agents performing these processes. We present arguments for assigning translation responsibility to particular agents in the cases of service invocation, response translation, and matchmaking.
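    The article's core claim, that ontology mappings should be published so that agents can apply them in their own context instead of relying on a central mediator, can be illustrated with a toy term-translation step. The ontology names and term tables below are invented for illustration:

```python
# A published mapping between two hypothetical service ontologies.
shipping_to_logistics = {"Parcel": "Package", "Depot": "Warehouse"}

def translate_message(message, mapping):
    """Rewrite a requestor's message into the service's vocabulary.
    Terms without a mapping entry pass through unchanged."""
    return {mapping.get(key, key): value for key, value in message.items()}

request = {"Parcel": "12kg", "Depot": "Vienna", "Priority": "high"}
print(translate_message(request, shipping_to_logistics))
```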
  5. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.02
    
    Abstract
    As an important support tool in scientific research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes: - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high-quality databases? What results can we get nowadays using general WWW search engines? - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues from an entire region or country. At the same time, however, they are faced with highly decentralized heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis. Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a visual user interface, without inconsistencies in content development seriously impairing the quality of the search results, for example when users phrase their search inquiry in the terminology to which they are accustomed.
  6. Budin, G.: Kommunikation in Netzwerken : Terminologiemanagement (2006) 0.02
    
    Abstract
    This chapter gives an overview of the goals, methods, and application contexts of terminology management. A definition of "terminology" leads on to a terminological knowledge model with which the dynamics and complexity of conceptual knowledge structures and the corresponding lexical forms of representation can be described. The goal of terminological knowledge modelling is to work out the linguistic and conceptual prerequisites for precise specialist communication as well as for semantic interoperability in the future "Semantic Web".
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
  7. Liang, A.; Salokhe, G.; Sini, M.; Keizer, J.: Towards an infrastructure for semantic applications : methodologies for semantic integration of heterogeneous resources (2006) 0.02
    
    Abstract
    The semantic heterogeneity presented by Web information in the agricultural domain poses tremendous information retrieval challenges. This article presents work taking place at the Food and Agriculture Organization (FAO) which addresses this challenge. Based on the analysis of resources in the domain of agriculture, this paper proposes (a) an application profile (AP) for dealing with the problem of heterogeneity originating from differences in terminologies, domain coverage, and domain modelling, and (b) a root application ontology (AAO) based on the application profile which can serve as a basis for extending knowledge of the domain. The paper explains how even a small investment in the enhancement of relations between vocabularies, both metadata and domain-specific, yields a relatively large return on investment.
    Footnote
    Simultaneously published as Knitting the Semantic Web
    Theme
    Semantic Web
  8. Koutsomitropoulos, D.A.; Solomou, G.D.; Alexopoulos, A.D.; Papatheodorou, T.S.: Semantic metadata interoperability and inference-based querying in digital repositories (2009) 0.02
    
    Abstract
    Metadata applications have evolved over time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Rarely, however, are these semantics communicated in machine-readable and understandable ways. At the same time, the process of transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this article we take up the well-established Dublin Core metadata standard, as well as other metadata schemata that often appear in digital repository set-ups, and suggest a proper Semantic Web OWL ontology. In this process the authors cope in novel ways with the discrepancies and incompatibilities indicative of such attempts. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual metadata records. The authors conclude by presenting a working prototype that provides for inference-based querying on top of digital repositories.
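    The inference-based querying the article demonstrates can be approximated with a tiny subproperty-closure example. The property names and hierarchy below are invented, and a real repository would use an OWL reasoner rather than this hand-rolled rule:

```python
# Both specific properties are declared subproperties of a generic "agent" relation.
subproperty_of = {"creator": "agent", "illustrator": "agent"}

triples = {("doc1", "creator", "Smith"), ("doc2", "illustrator", "Jones")}

def infer(triples):
    """Materialise the triples entailed by the subproperty hierarchy."""
    inferred = set(triples)
    for s, p, o in triples:
        if p in subproperty_of:
            inferred.add((s, subproperty_of[p], o))
    return inferred

# Querying the generic property now finds both records.
agents = sorted(s for s, p, o in infer(triples) if p == "agent")
print(agents)
```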
    Theme
    Semantic Web
  9. Vizine-Goetz, D.; Houghton, A.; Childress, E.: Web services for controlled vocabularies (2006) 0.01
    
    Abstract
    Amid the debates about whether folksonomies will supplant controlled vocabularies and whether the Library of Congress Subject Headings (LCSH) and Dewey Decimal Classification (DDC) system have outlived their usefulness, libraries, museums and other organizations continue to require efficient, effective access to controlled vocabularies for creating consistent metadata for their collections. In this article, we present an approach for using Web services to interact with controlled vocabularies. Services are implemented within a service-oriented architecture (SOA) framework. SOA is an approach to distributed computing where services are loosely coupled and discoverable on the network. A set of experimental services for controlled vocabularies is provided through the Microsoft Office (MS) Research task pane (a small window or sidebar that opens up next to Internet Explorer (IE) and other Microsoft Office applications). The research task pane is a built-in feature of IE when MS Office 2003 is loaded. The research pane enables a user to take advantage of a number of research and reference services accessible over the Internet. Web browsers, such as Mozilla Firefox and Opera, also provide sidebars which could be used to deliver similar, loosely-coupled Web services.
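    The loose coupling the abstract describes can be sketched as a minimal registry of vocabulary services that clients discover by name at call time. The service name, the demo headings, and the registry itself are illustrative assumptions; the actual OCLC services are not modelled here (though "Railroads" is indeed the LCSH heading for railways):

```python
# Minimal service-oriented sketch: services register under a name and are
# looked up at call time, so clients stay decoupled from implementations.
registry = {}

def register(name):
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

@register("lcsh.suggest")
def lcsh_suggest(term):
    """Toy heading-suggestion service with a two-entry demo vocabulary."""
    demo = {"railways": ["Railroads"], "films": ["Motion pictures"]}
    return demo.get(term.lower(), [])

def call(service, *args):
    return registry[service](*args)

print(call("lcsh.suggest", "Railways"))
```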
  10. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.01
    
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It aims to find semantic relationships between entities (i.e. concepts, attributes, and relations) of two ontologies. It benefits many applications, such as the integration of ontology-based web data sources and the interoperability of agents or web services. To reduce users' effort as much as possible, (semi-)automatic ontology mapping is becoming more and more important to bring it to fruition. In the existing literature, many approaches have attracted considerable interest by combining several different similarity/mapping strategies (namely multi-strategy based mapping). However, experiments show that multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly aim to deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or just one single-strategy based algorithm? (2) if the task is suitable for multi-strategy, how should the strategies be selected for the final combined scenario? We propose an approach of multiple strategy detection for ontology mapping. The results obtained so far show that multi-strategy detection improves precision and recall significantly.
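    A minimal sketch of the multi-strategy idea discussed above: several similarity strategies are computed per candidate pair and combined by weights. The two strategies and the weights below are generic stand-ins, not the authors' algorithm:

```python
def char_overlap_sim(a, b):
    """Crude character-set similarity as a stand-in for a string strategy."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / max(len(sa), len(sb))

def token_overlap_sim(a, b):
    """Jaccard overlap of whitespace tokens as a second strategy."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

STRATEGIES = [(char_overlap_sim, 0.4), (token_overlap_sim, 0.6)]

def combined_similarity(a, b):
    """Weighted combination of individual strategies (multi-strategy mapping)."""
    return sum(weight * strategy(a, b) for strategy, weight in STRATEGIES)

print(round(combined_similarity("Web Service", "Web services"), 3))
```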
    Content
    Contribution for the Workshop on The Semantic Computing Initiative (SeC 2005), "From Semantic Web to Semantic World", held in conjunction with the 14th Int'l Conf. on World Wide Web (WWW2005); cf.: http://www.instsec.org/2005ws/.
  11. McCulloch, E.; Shiri, A.; Nicholson, A.D.: Subject searching requirements : the HILT II experience (2004) 0.01
    
    Abstract
    The HILT Phase II project aimed to develop a pilot terminologies server with a view to improving cross-sectoral information retrieval. In order to inform this process, it was first necessary to examine how a representative group of users approached a range of information-related tasks. This paper focuses on exploratory interviews conducted to investigate the proposed ideal and actual strategies of a group of 30 users in relation to eight separate information tasks. In addition, users were asked to give examples of search terms they may employ and to describe how they would formulate search queries in each scenario. The interview process undertaken and the results compiled are outlined, and associated implications for the development of a pilot terminologies server are discussed.
  12. Landry, P.: Providing multilingual subject access through linking of subject heading languages : the MACS approach (2009) 0.01
    
    Abstract
    The MACS project aims at providing multilingual subject access to library catalogues through the use of concordances between subject headings from LCSH, RAMEAU and SWD. The manual approach, as used by MACS, has been up to now the most reliable method for ensuring accurate multilingual subject access to bibliographic data. The presentation will give an overview on the development of the project and will outline the strategy and methods used by the MACS project. The presentation will also include a demonstration of the search interface developed by The European Library (TEL).
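    The manual concordances MACS builds can be pictured as clusters of equivalent headings across the three subject heading languages; a multilingual lookup is then a join over those links. The heading strings below are illustrative examples, not actual MACS data:

```python
# One concordance cluster links equivalent headings across LCSH, RAMEAU and SWD.
concordances = [
    {"LCSH": "Libraries", "RAMEAU": "Bibliothèques", "SWD": "Bibliothek"},
    {"LCSH": "Indexing", "RAMEAU": "Indexation", "SWD": "Indexierung"},
]

def equivalents(scheme, heading):
    """Find the matching headings in the other subject heading languages."""
    for cluster in concordances:
        if cluster.get(scheme) == heading:
            return {s: h for s, h in cluster.items() if s != scheme}
    return {}

print(equivalents("SWD", "Bibliothek"))
```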
  13. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.01
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web.
On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
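    The "instance transformation functions" central to the proposed mapping model can be sketched as callables attached to property mappings between two metadata schemes. The property names, the target fields, and the "Last, First" split rule below are assumptions for illustration, not the thesis's actual binding:

```python
# A mapping relates a source property to a transformation of its value.
def split_creator(value):
    """Transform a 'Last, First' creator string into separate name parts."""
    last, first = [part.strip() for part in value.split(",", 1)]
    return {"firstName": first, "lastName": last}

mappings = [("dc:creator", split_creator)]

def apply_mappings(record):
    """Reconcile a source record into the target scheme via the mappings."""
    target = {}
    for prop, transform in mappings:
        if prop in record:
            target.update(transform(record[prop]))
    return target

print(apply_mappings({"dc:creator": "Haslhofer, Bernd"}))
```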
  14. Mayr, P.; Walter, A.-K.: Einsatzmöglichkeiten von Crosskonkordanzen (2007) 0.01
    
    Abstract
    This paper presents application scenarios and specific problem areas of cross-concordances (CK) in the project "Kompetenznetzwerk Modellbildung und Heterogenitätsbehandlung" (KoMoHe), as well as the network of terminology mappings created so far. The cross-concordances developed at the IZ are to be used in the future via a terminology service offered as a web service, which is presented in the paper by way of example. Furthermore, a test scenario including an evaluation design is described, by means of which the added value of cross-concordances can be investigated empirically.
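A terminology service of the kind described can be thought of as a lookup from a term in one controlled vocabulary to equivalent or related terms in another. The sketch below uses invented entries and relation labels, not the actual KoMoHe data or service API:

```python
# Sketch of a cross-concordance lookup between two controlled vocabularies.
# Vocabulary names, terms and relation types are invented examples.

# Each entry: source term -> list of (target term, relation) pairs, where the
# relation is an equivalence ("exact") or a hierarchical step ("broader").
CROSSWALK = {
    ("Thesaurus-A", "Thesaurus-B"): {
        "Arbeitslosigkeit": [("Arbeitslosigkeit", "exact")],
        "Jugendarbeitslosigkeit": [("Arbeitslosigkeit", "broader")],
    },
}

def translate_term(term, source_vocab, target_vocab, relations=("exact",)):
    """Return target-vocabulary terms reachable via the allowed relations."""
    mapping = CROSSWALK.get((source_vocab, target_vocab), {})
    return [t for t, rel in mapping.get(term, []) if rel in relations]
```

Allowing "broader" as a fallback when no exact equivalent exists mirrors a typical retrieval trade-off: recall improves at the cost of some precision.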
  15. McCulloch, E.: Multiple terminologies : an obstacle to information retrieval (2004) 0.01
    Abstract
    An issue currently at the forefront of digital library research is the prevalence of disparate terminologies and the associated limitations imposed on user searching. It is thought that semantic interoperability is achievable by improving the compatibility between terminologies and classification schemes, enabling users to search multiple resources simultaneously and improve retrieval effectiveness through the use of associated terms drawn from several schemes. This column considers the terminology issue before outlining various proposed methods of tackling it, with a particular focus on terminology mapping.
  16. Sigel, A.: Wissensorganisation, Topic Maps und Ontology Engineering : Die Verbindung bewährter Begriffsstrukturen mit aktueller XML Technologie (2004) 0.01
    Abstract
    How can conceptual structures be connected to Topic Maps? More generally: how can knowledge organization contribute to making a concept-based infrastructure available in the Semantic Web? Knowledge organization has not yet really taken up this question. Overall, the contact between semantic knowledge technologies and questions of knowledge organization is still very limited, although concept structures, ontologies and Topic Maps fit together well in principle, and considering them jointly promises insights into central questions of knowledge organization such as semantic interoperability and semantic retrieval. This paper therefore motivates and sketches the basic idea that it should be possible to suitably connect a language for representing concept structures in knowledge organization with Topic Maps. A more detailed investigation and an implementation are, however, still pending. Specifically, it is conjectured that CLF (Concept Language Formalism), the formalism underlying Concepto, can be mapped advantageously onto Topic Maps. In this way, concept and topic networks can be realized that are based on explicit concept systems. On the side of knowledge organization there is a need to become familiar with current developments in the fields of the Semantic Web and ontology engineering, but also to contribute its own competence more actively to these fields. To make this possible, this paper first gives a brief introduction to ontologies and Topic Maps from the perspective of knowledge organization and discusses important areas of overlap.
  17. Mayr, P.; Walter, A.-K.: Zum Stand der Heterogenitätsbehandlung in vascoda : Bestandsaufnahme und Ausblick (2007) 0.01
    Abstract
    This paper presents the procedure for creating cross-concordances (CK) in the project "Kompetenznetzwerk Modellbildung und Heterogenitätsbehandlung" (KoMoHe) as well as the network of terminology mappings created so far. In addition to cross-concordances between indexing languages within a single field of application (e.g. the social and political sciences), term examples are presented that link subjects from different disciplines. Typical application scenarios for cross-concordances within information systems are also presented. The cross-concordances developed at the IZ are to be used in the future via a terminology service offered as a web service. The so-called heterogeneity service, which is intended to act as a term translation service, is demonstrated using concrete queries as examples.
  18. Panzer, M.; Zeng, M.L.: Modeling classification systems in SKOS : Some challenges and best-practice (2009) 0.01
    Abstract
    Representing classification systems on the web for publication and exchange continues to be a challenge within the SKOS framework. This paper focuses on the differences between classification schemes and other families of KOS (knowledge organization systems) that make it difficult to express classifications without sacrificing a large amount of their semantic richness. Issues resulting from the specific set of relationships between classes and topics that defines the basic nature of any classification system are discussed. Where possible, different solutions within the frameworks of SKOS and OWL are proposed and examined.
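One of the modelling questions the paper discusses is how to express a class of a classification scheme in SKOS while keeping the notation (the class number) distinct from its human-readable caption. A minimal sketch of such a serialization, using a generic DDC-like example rather than one of the paper's cases, might look like this:

```python
# Sketch: serialize one classification class as a SKOS concept in Turtle.
# Generic DDC-like example; the scheme URI, notation and caption are
# illustrative only, not taken from the paper.

def class_to_skos(scheme_uri, notation, caption, broader_notation=None):
    """Render a classification class as SKOS Turtle, keeping the notation
    (class number) separate from the human-readable caption."""
    uri = f"<{scheme_uri}/{notation}>"
    lines = [
        "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
        f"{uri} a skos:Concept ;",
        f'    skos:notation "{notation}" ;',
        f'    skos:prefLabel "{caption}"@en ;',
        f"    skos:inScheme <{scheme_uri}> ",
    ]
    if broader_notation:
        lines[-1] += ";"
        lines.append(f"    skos:broader <{scheme_uri}/{broader_notation}> ")
    lines[-1] += "."
    return "\n".join(lines)

print(class_to_skos("http://example.org/ddc", "025.4",
                    "Subject analysis and control", broader_notation="025"))
```

This much fits SKOS comfortably; the difficulties the paper examines begin beyond it, where the relationship between a class and the topics it collocates has no direct SKOS counterpart and solutions must be sought in OWL or in extension properties.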
  19. Hoffmann, P.; Médini, L.; Ghodous, P.: Using context to improve semantic interoperability (2006) 0.01
    Source
    Leading the Web in concurrent engineering: next generation concurrent engineering. Proceedings of the 2006 ISPE Conference on Concurrent Engineering. Edited by Parisa Ghodous, Rose Dieng-Kuntz, Geilson Loureiro
  20. Wilde, E.: Semantische Interoperabilität von XML Schemas (2005) 0.01
    Source
    XML Web services magazine. 2005, no.2, S.35-38
