Search (65 results, page 1 of 4)

  • × theme_ss:"Semantische Interoperabilität"
  • × year_i:[2000 TO 2010}
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.11
    0.10973629 = product of:
      0.32920888 = sum of:
        0.08230222 = product of:
          0.24690665 = sum of:
            0.24690665 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.24690665 = score(doc=306,freq=2.0), product of:
                0.37656134 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044416238 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.24690665 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.24690665 = score(doc=306,freq=2.0), product of:
            0.37656134 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.044416238 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.33333334 = coord(2/6)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
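
    The relevance figures above are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) scoring model. As a rough illustration, the following sketch recomputes the score of result 1 from the constants shown in the tree; idf, queryNorm and fieldNorm are read off the output, the formulas are the standard ClassicSimilarity ones, so treat this as an approximation of the model rather than the engine's exact code path:

```python
import math

def tf(freq):                 # term-frequency factor: sqrt(freq)
    return math.sqrt(freq)

def idf(doc_freq, max_docs):  # inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

query_norm = 0.044416238      # read off the tree: normalizes across queries
field_norm = 0.0546875        # read off the tree: field length/boost byte

i = idf(24, 44218)                        # ~8.478011, as shown above
query_weight = i * query_norm             # ~0.37656134
field_weight = tf(2.0) * i * field_norm   # ~0.65568775
term_score = query_weight * field_weight  # ~0.24690665

# coord(k/n) down-weights documents matching only k of n query clauses
partial = term_score * (1 / 3)            # ~0.08230222
total = (partial + term_score) * (2 / 6)  # ~0.10973629, matching the tree
print(f"{total:.8f}")
```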
  2. Krause, J.: Shell Model, Semantic Web and Web Information Retrieval (2006) 0.04
    0.042189382 = product of:
      0.12656814 = sum of:
        0.04816959 = weight(_text_:wide in 6061) [ClassicSimilarity], result of:
          0.04816959 = score(doc=6061,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 6061, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
        0.078398556 = weight(_text_:web in 6061) [ClassicSimilarity], result of:
          0.078398556 = score(doc=6061,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5408555 = fieldWeight in 6061, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6061)
      0.33333334 = coord(2/6)
    
    Abstract
    The mid-1990s were marked by growing enthusiasm for the possibilities of the WWW, which has only recently given way - at least in relation to scientific information - to a more differentiated weighing of its advantages and disadvantages. Web Information Retrieval emerged as a specialized discipline with great commercial significance (for an overview see Lewandowski 2005). Besides the new technological infrastructure that enables the indexing and searching (in seconds) of previously unimaginable amounts of data worldwide, new assessment processes for the ranking of search results have been developed which use the link structures of the Web. They are the main innovation with respect to the traditional "mother discipline" of Information Retrieval. From the beginning, the link structures of Web pages have been applied in commercial search engines in a wide array of variations. From the perspective of scientific information, link-topology-based approaches were in essence trying to solve a self-created problem: on the one hand, it quickly became clear that the openness of the Web led to an up-to-now unknown increase in available information, but this also caused the quality of the Web pages searched to become a problem - and with it the relevance of the results. The gatekeeper function of traditional information providers, which narrows every user query down to high-quality sources, was lacking. The recognition of the "authoritativeness" of Web pages by general search engines such as Google was therefore one of the most important factors in their success.
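
    The link-topology ranking the abstract credits for recognizing "authoritative" pages can be pictured with a minimal PageRank-style power iteration; the four-page graph and the damping value below are invented for the example:

```python
links = {            # page -> outgoing links (invented four-page web)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # power iteration until roughly stable
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
        for p in pages
    }

print(max(rank, key=rank.get))  # "c" accumulates the most authority
```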
    Theme
    Semantic Web
  3. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.04
    0.03928657 = product of:
      0.11785971 = sum of:
        0.09679745 = weight(_text_:web in 4184) [ClassicSimilarity], result of:
          0.09679745 = score(doc=4184,freq=14.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.6677857 = fieldWeight in 4184, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4184)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.04212451 = score(doc=4184,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The Internet as a medium is in flux, and its conditions of publication and reception are changing with it. What opportunities do the two visions of the future currently being discussed in parallel, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models from the perspectives of application and technology, but also sheds light on their shortcomings as well as the added value of a combination suited to the medium. Using the grammatical online information system grammis as an example, a strategy for the integrative use of their respective strengths is sketched.
    Date
    22. 1.2011 10:38:28
    Source
    Kommunikation, Partizipation und Wirkungen im Social Web, Band 1. Ed.: A. Zerfaß et al.
    Theme
    Semantic Web
  4. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.04
    0.036973603 = product of:
      0.073947206 = sum of:
        0.026132854 = weight(_text_:web in 3628) [ClassicSimilarity], result of:
          0.026132854 = score(doc=3628,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.03276989 = weight(_text_:computer in 3628) [ClassicSimilarity], result of:
          0.03276989 = score(doc=3628,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 3628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.030088935 = score(doc=3628,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to technically and semantically integrate the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations is also made, outlining the important approaches and features that support such a cross-browsing middleware service.
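
    A hypothetical sketch of the spine idea described above: terms from two vocabularies are each mapped to a DDC class, and the DDC then acts as a switching language between them. All mappings below are invented for illustration:

```python
# Invented mappings: each vocabulary term points at a DDC spine class.
ukat_to_ddc = {"Information retrieval": "025.04"}
acm_to_ddc = {"H.3.3 Information Search and Retrieval": "025.04"}

def cross_browse(term, source_map, target_map):
    """Route a term from one vocabulary to another via the DDC spine."""
    spine_class = source_map.get(term)
    return [t for t, ddc in target_map.items() if ddc == spine_class]

print(cross_browse("Information retrieval", ukat_to_ddc, acm_to_ddc))
```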
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
  5. Tang, J.; Liang, B.-Y.; Li, J.-Z.: Toward detecting mapping strategies for ontology interoperability (2005) 0.03
    0.033478435 = product of:
      0.1004353 = sum of:
        0.04816959 = weight(_text_:wide in 3367) [ClassicSimilarity], result of:
          0.04816959 = score(doc=3367,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 3367, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3367)
        0.052265707 = weight(_text_:web in 3367) [ClassicSimilarity], result of:
          0.052265707 = score(doc=3367,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 3367, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3367)
      0.33333334 = coord(2/6)
    
    Abstract
    Ontology mapping is one of the core tasks for ontology interoperability. It aims to find semantic relationships between entities (i.e. concepts, attributes, and relations) of two ontologies. It benefits many applications, such as the integration of ontology-based web data sources and the interoperability of agents or web services. To reduce users' effort as much as possible, (semi-)automatic ontology mapping is becoming more and more important to bring it to fruition. In the existing literature, many approaches have attracted considerable interest by combining several different similarity/mapping strategies (namely multi-strategy based mapping). However, experiments show that multi-strategy based mapping does not always outperform its single-strategy counterpart. In this paper, we mainly aim to deal with two problems: (1) for a new, unseen mapping task, should we select a multi-strategy based algorithm or just one single-strategy based algorithm? (2) if the task is suitable for multi-strategy, how should the strategies be selected for the final combined scenario? We propose an approach of multiple strategy detection for ontology mapping. The results obtained so far show that multi-strategy detection improves precision and recall significantly.
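
    As a toy illustration of the single- versus multi-strategy question, the sketch below combines two similarity strategies with fixed weights; the strategies and weights are invented and are not the authors' actual algorithms:

```python
from difflib import SequenceMatcher

def edit_similarity(a, b):     # string-level strategy
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_similarity(a, b):    # token-overlap (Jaccard) strategy
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def multi_strategy(a, b, weights=(0.6, 0.4)):
    scores = (edit_similarity(a, b), token_similarity(a, b))
    return sum(w * s for w, s in zip(weights, scores))

# Whether the combination beats the best single strategy is task-dependent:
print(edit_similarity("hasAuthor", "authorOf"))
print(multi_strategy("hasAuthor", "authorOf"))
```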
    Content
    Contribution on the occasion of: Workshop on The Semantic Computing Initiative (SeC 2005) --- From Semantic Web to Semantic World --- to be held in conjunction with The 14th Int'l Conf. on World Wide Web (WWW2005); cf.: http://www.instsec.org/2005ws/.
  6. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.03
    0.028143687 = product of:
      0.08443106 = sum of:
        0.063368805 = weight(_text_:web in 759) [ClassicSimilarity], result of:
          0.063368805 = score(doc=759,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43716836 = fieldWeight in 759, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=759)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.04212451 = score(doc=759,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry- and domain-specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
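
    A small sketch of the gap the authors describe: two syntactically different XML encodings of the same fact cannot be equated by structure alone, whereas an RDF-style triple makes the shared statement explicit. The element names, attribute names and URIs here are invented:

```python
import xml.etree.ElementTree as ET

# Two invented DTD styles carrying the same fact:
doc_a = "<book><author>Heflin</author></book>"
doc_b = '<publication creator="Heflin"/>'

def creator_from_a(xml):
    return ET.fromstring(xml).findtext("author")    # child element

def creator_from_b(xml):
    return ET.fromstring(xml).get("creator")        # attribute

# Once the semantics are agreed, both map to one shared statement:
print(("ex:doc1", "dc:creator", creator_from_a(doc_a)))
print(("ex:doc2", "dc:creator", creator_from_b(doc_b)))
```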
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
  7. Harken, S.E.: Subject semantic interoperability. Report of the Subcommittee on Semantic Interoperability to the ALCTS Subject Analysis Committee : Final report (2006) 0.03
    0.026979826 = product of:
      0.08093948 = sum of:
        0.04816959 = weight(_text_:wide in 906) [ClassicSimilarity], result of:
          0.04816959 = score(doc=906,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=906)
        0.03276989 = weight(_text_:computer in 906) [ClassicSimilarity], result of:
          0.03276989 = score(doc=906,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=906)
      0.33333334 = coord(2/6)
    
    Abstract
    The need for improved semantic interoperability between and among vocabularies and knowledge organization schemes is undeniable and growing in importance. There is an ever-increasing need to create an environment in which even multiple portals could be accessed via subject metadata, using software that is neutral and available ubiquitously or directly to the user and that could be copied by libraries for use in their own environments. In order to develop or improve a knowledge organization system, including emerging options in semantic interoperability, scholars and practitioners need to be able to evaluate a wide variety of projects and stay current with the professional literature. Based on its findings, the Subcommittee concludes that the development of a successful subject semantic interoperability project is a long and difficult process. It requires a substantial investment of financial, human and computer resources. The Subcommittee recommends using the information and tools in this report and its appendices to assist in developing a successful project incorporating subject semantic interoperability. Finally, the Subcommittee concludes that since this field of endeavor is still relatively young and immature, it is too early to generate a set of Best Practices that could be used in developing a successful project. We are past the theoretical and basic research phase and into the development phase. Even though there are some successful projects in full production, more projects need to reach maturity and much more research needs to be done.
  8. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.03
    0.02579916 = product of:
      0.07739748 = sum of:
        0.034061044 = weight(_text_:wide in 3173) [ClassicSimilarity], result of:
          0.034061044 = score(doc=3173,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.17307651 = fieldWeight in 3173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
        0.04333644 = weight(_text_:web in 3173) [ClassicSimilarity], result of:
          0.04333644 = score(doc=3173,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.29896918 = fieldWeight in 3173, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
      0.33333334 = coord(2/6)
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a certain integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the amount of steadily increasing structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships during run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured in order to integrate metadata from various heterogeneous data sources without the need to install any mapping solution or metadata integration solution in a local system environment.
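
    A heavily simplified sketch of the representation-to-execution step described above: a declarative mapping between two properties is rendered as a SPARQL CONSTRUCT query string. The property URIs are illustrative; the thesis's actual mapping language covers far more than one-to-one property renaming:

```python
# An invented one-property mapping specification:
mapping = {
    "source": "http://example.org/schema#title",
    "target": "http://purl.org/dc/terms/title",
}

def to_sparql(m):
    """Render the mapping as an executable SPARQL CONSTRUCT query."""
    return (
        "CONSTRUCT { ?s <%(target)s> ?v }\n"
        "WHERE     { ?s <%(source)s> ?v }" % m
    )

print(to_sparql(mapping))
```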
    Content
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, a property that is not given by default. Metadata mapping techniques enable domain experts to establish metadata interoperability in a particular integration context, and mapping solutions are meant to provide the necessary support. While such solutions already exist for the established field of interoperable databases, this is not the case for Web environments. Considering the steadily growing amount of structured metadata and metadata schemes on the Web, a need for Web-based mapping solutions becomes apparent. The core of such a solution is a mapping model that defines the language constructs necessary for specifying mappings. Existing Semantic Web languages such as RDFS and OWL offer basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of possible semantic and structural heterogeneities that can occur between distinct, incompatible metadata objects. Moreover, technical approaches for transforming previously defined mappings into executable queries are lacking. As the central scientific contribution of this dissertation, an abstract mapping model is presented that reflects the mapping problem on a generic level and offers approaches for reconciling incompatible schemes. Instance transformation functions and URIs play a central role in this model: the former bridge a broad spectrum of possible semantic and structural heterogeneities, while the latter bind the mapping model into the architecture of the World Wide Web. On a concrete, language-specific level, a binding of the abstract model to the RDF Vocabulary Description Language (RDFS) is presented, enabling mappings between different metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic mapping process that categorises the requirements for mapping solutions into four successive phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. Within this dissertation, we focus mainly on the representation phase and on the transformation of mapping specifications into executable SPARQL queries. To support the discovery phase, the mapping model provides an interface for integrating schema or ontology matching algorithms. For the maintenance phase, we present a simple but fit-for-purpose mapping registry concept. Based on the mapping model, we introduce a Web-based mediator-wrapper architecture that allows domain experts to define SPARQL mediation interfaces. The data sources to be integrated must be encapsulated by wrapper components that expose the contained metadata on the Web and provide SPARQL access. As an exemplary wrapper component, we present the OAI2LOD Server, which can integrate data sources that expose their metadata via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In a case study, we show how mappings can be created in Web environments and how, after a few simple configuration steps, our mediator-wrapper architecture can integrate metadata from distinct, heterogeneous data sources without the need to install a mapping solution in a local system environment.
  9. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    0.024555234 = product of:
      0.0736657 = sum of:
        0.05561234 = weight(_text_:computer in 4820) [ClassicSimilarity], result of:
          0.05561234 = score(doc=4820,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.34261024 = fieldWeight in 4820, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=4820)
        0.01805336 = product of:
          0.03610672 = sum of:
            0.03610672 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.03610672 = score(doc=4820,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  10. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.02
    0.023561096 = product of:
      0.070683286 = sum of:
        0.031359423 = weight(_text_:web in 2186) [ClassicSimilarity], result of:
          0.031359423 = score(doc=2186,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2186)
        0.039323866 = weight(_text_:computer in 2186) [ClassicSimilarity], result of:
          0.039323866 = score(doc=2186,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 2186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2186)
      0.33333334 = coord(2/6)
    
    Series
    Lecture notes in computer science; vol. 2870
    Source
    The Semantic Web - ISWC 2003. Eds. D. Fensel et al
  11. Tennis, J.T.: Versioning concept schemes for persistent retrieval (2006) 0.02
    0.022700539 = product of:
      0.068101615 = sum of:
        0.03853567 = weight(_text_:wide in 1956) [ClassicSimilarity], result of:
          0.03853567 = score(doc=1956,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 1956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
        0.029565949 = weight(_text_:web in 1956) [ClassicSimilarity], result of:
          0.029565949 = score(doc=1956,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.2039694 = fieldWeight in 1956, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1956)
      0.33333334 = coord(2/6)
    
    Abstract
    Things change. Words change, meaning changes and use changes both words and meaning. In information access systems this means concept schemes such as thesauri or classification schemes change. They always have. Concept schemes that have survived have evolved over time, moving from one version, often called an edition, to the next. If we want to manage how words and meanings - and as a consequence use - change in an effective manner, and if we want to be able to search across versions of concept schemes, we have to track these changes. This paper explores how we might expand SKOS, a World Wide Web Consortium (W3C) draft recommendation, in order to do that kind of tracking. The Simple Knowledge Organization System (SKOS) Core Guide is sponsored by the Semantic Web Best Practices and Deployment Working Group. The second draft, edited by Alistair Miles and Dan Brickley, was issued in November 2005. SKOS is a "model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, other types of controlled vocabulary and also concept schemes embedded in glossaries and terminologies" in RDF. How SKOS handles versioning in concept schemes is an open issue. The current draft guide suggests using OWL and DCTERMS as mechanisms for concept scheme revision. As it stands, an editor of a concept scheme can make notes or declare in OWL that more than one version exists. This paper adds to the SKOS Core by introducing a tracking system for changes in concept schemes. We call this tracking system vocabulary ontogeny. Ontogeny is a biological term for the development of an organism during its lifetime. Here we use the ontogeny metaphor to describe how vocabularies change over their lifetime. Our purpose here is to create a conceptual mechanism that will track these changes and in so doing enhance information retrieval and prevent document loss through versioning, thereby enabling persistent retrieval.
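
    One minimal way to picture the proposed ontogeny tracking is a dated change log per concept; the field names below are invented for illustration and are not part of SKOS:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    uri: str
    pref_label: str
    history: list = field(default_factory=list)  # dated change events

    def relabel(self, new_label, when):
        # Record (date, old label, new label) before applying the change,
        # so earlier versions remain searchable.
        self.history.append((when, self.pref_label, new_label))
        self.pref_label = new_label

c = Concept("ex:c1", "Aeroplanes")
c.relabel("Airplanes", "2006-01-01")
print(c.pref_label, c.history)   # current label plus its ontogeny
```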
  12. Godby, C.J.; Smith, D.; Childress, E.: Encoding application profiles in a computational model of the crosswalk (2008) 0.02
    0.020102633 = product of:
      0.060307898 = sum of:
        0.045263432 = weight(_text_:web in 2649) [ClassicSimilarity], result of:
          0.045263432 = score(doc=2649,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3122631 = fieldWeight in 2649, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2649)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2649) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2649,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2649)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    OCLC's Crosswalk Web Service (Godby, Smith and Childress, 2008) formalizes the notion of crosswalk, as defined in Gill et al. (n.d.), by hiding technical details and permitting the semantic equivalences to emerge as the centerpiece. One outcome is that metadata experts, who are typically not programmers, can enter the translation logic into a spreadsheet that can be automatically converted into executable code. In this paper, we describe the implementation of the Dublin Core Terms application profile in the management of crosswalks involving MARC. A crosswalk that encodes an application profile extends the typical format with two columns: one that annotates the namespace to which an element belongs, and one that annotates a 'broader-narrower' relation between a pair of elements, such as Dublin Core coverage and Dublin Core Terms spatial. This information is sufficient to produce scripts written in OCLC's Semantic Equivalence Expression Language (or Seel), which are called from the Crosswalk Web Service to generate production-grade translations. With its focus on elements that can be mixed, matched, added, and redefined, the application profile (Heery and Patel, 2000) is a natural fit with the translation model of the Crosswalk Web Service, which attempts to achieve interoperability by mapping one pair of elements at a time.
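
    A compact sketch of the crosswalk-as-data idea: each row names a source element, a target element and its namespace, plus the optional broader-narrower note, and a generic routine applies the rows to a record. The rows are illustrative (echoing the coverage/spatial example above), and the routine is not OCLC's Seel:

```python
# Invented crosswalk rows: (source element, target element, namespace,
# optional broader/narrower note).
crosswalk = [
    ("260c", "date", "dc", None),
    ("spatial", "coverage", "dc", "broader"),
]

def translate(record, rows):
    """Apply crosswalk rows, one element pair at a time, to a record."""
    out = {}
    for src, tgt, ns, _relation in rows:
        if src in record:
            out[f"{ns}:{tgt}"] = record[src]
    return out

print(translate({"260c": "2008", "spatial": "Berlin"}, crosswalk))
```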
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  13. Mao, M.: Ontology mapping : towards semantic interoperability in distributed and heterogeneous environments (2008) 0.02
    0.019813985 = product of:
      0.059441954 = sum of:
        0.03853567 = weight(_text_:wide in 4659) [ClassicSimilarity], result of:
          0.03853567 = score(doc=4659,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
        0.020906283 = weight(_text_:web in 4659) [ClassicSimilarity], result of:
          0.020906283 = score(doc=4659,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.14422815 = fieldWeight in 4659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4659)
      0.33333334 = coord(2/6)
    
    Abstract
    This dissertation studies ontology mapping: the problem of finding semantic correspondences between similar elements of different ontologies. In the dissertation, elements denote classes or properties of ontologies. The goal of this research is to use ontology mapping to make heterogeneous information more accessible. The World Wide Web (WWW) is now widely used as a universal medium for information exchange. Semantic interoperability among different information systems in the WWW is limited due to information heterogeneity and the non-semantic nature of HTML and URLs. Ontologies have been suggested as a way to solve the problem of information heterogeneity by providing formal, explicit definitions of data and reasoning ability over related concepts. Given that no universal ontology exists for the WWW, work has focused on finding semantic correspondences between similar elements of different ontologies, i.e., ontology mapping. Ontology mapping can be done either by hand or using automated tools. Manual mapping becomes impractical as the size and complexity of ontologies increase. Fully or semi-automated mapping approaches have been examined by several research studies. Previous fully or semi-automated mapping approaches include analyzing linguistic information of elements in ontologies, treating ontologies as structural graphs, applying heuristic rules and machine learning techniques, and using probabilistic and reasoning methods, etc. In this paper, two generic ontology mapping approaches are proposed. One is the PRIOR+ approach, which utilizes both information retrieval and artificial intelligence techniques in the context of ontology mapping. The other is the non-instance learning based approach, which experimentally explores machine learning algorithms to solve the ontology mapping problem without requiring any instances. The results of PRIOR+ on different tests at the OAEI ontology matching campaign 2007 are encouraging. The non-instance learning based approach has shown potential for solving the ontology mapping problem on OAEI benchmark tests.
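
    The information-retrieval side of such an approach can be sketched as bag-of-words profiles per ontology element compared by cosine similarity; the profiles below are invented, and PRIOR+ itself combines several more signals:

```python
import math
from collections import Counter

def cosine(text_a, text_b):
    va, vb = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(va[t] * vb[t] for t in va)       # Counter returns 0 if absent
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

# Invented element profiles built from labels, comments and neighbours:
profile_a = "journal article publication written work"
profile_b = "paper article published work"
print(round(cosine(profile_a, profile_b), 3))  # propose a match if high
```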
  14. Panzer, M.: Relationships, spaces, and the two faces of Dewey (2008) 0.02
    0.018240947 = product of:
      0.054722838 = sum of:
        0.035060905 = weight(_text_:web in 2127) [ClassicSimilarity], result of:
          0.035060905 = score(doc=2127,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.24187797 = fieldWeight in 2127, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2127)
        0.019661933 = weight(_text_:computer in 2127) [ClassicSimilarity], result of:
          0.019661933 = score(doc=2127,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.12113102 = fieldWeight in 2127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2127)
      0.33333334 = coord(2/6)
    
    Content
    What are those "other" relationships that Dewey possesses and that seem so important to surface? Firstly, there is the relationship of concepts to resources. Dewey has been used for a long time, and over 200,000 numbers are assigned to information resources each year and added to WorldCat by the Library of Congress and the German National Library alone. Secondly, we have relationships between concepts in the scheme itself. Dewey provides a rich set of non-hierarchical relations, indicating other relevant and related subjects across disciplinary boundaries. Thirdly, perhaps most importantly, there is the relationship between the same concepts across different languages. Dewey has been translated extensively, and current versions are available in French, German, Hebrew, Italian, Spanish, and Vietnamese. Briefer representations of the top-three levels (the DDC Summaries) are available in several languages in the DeweyBrowser. This multilingual nature of the scheme allows searchers to access a broader range of resources or to switch the language of--and thus localize--subject metadata seamlessly. MelvilClass, a Dewey front-end developed by the German National Library for the German translation, could be used as a common interface to the DDC in any language, as it is built upon the standard DDC data format. It is not hard to give an example of the basic terminology of a class pulled together in a multilingual way:
        <class/794.8> a skos:Concept ;
            skos:notation "794.8"^^ddc:notation ;
            skos:prefLabel "Computer games"@en ;
            skos:prefLabel "Computerspiele"@de ;
            skos:prefLabel "Jeux sur ordinateur"@fr ;
            skos:prefLabel "Juegos por computador"@es .
    Expressed in this manner, the Dewey number provides a language-independent representation of a Dewey concept, accompanied by language-dependent assertions about the concept. This information, identified by a URI, can be easily consumed by semantic web agents and used in various metadata scenarios. Fourthly, as we have seen, it is important to play well with others, i.e., establishing and maintaining relationships to other KOS and making the scheme available in different formats. As noted in the Dewey blog post "Tags and Dewey," since no single scheme is ever going to be the be-all, end-all solution for knowledge discovery, DDC concepts have been extensively mapped to other vocabularies and taxonomies, sometimes bridging them and acting as a backbone, sometimes using them as additional access vocabulary to be able to do more work "behind the scenes." To enable other applications and schemes to make use of those relationships, the full Dewey database is available in XML format; RDF-based formats and a web service are forthcoming. Pulling those relationships together under a common surface will be the next challenge going forward. In the semantic web community the concept of Linked Data (http://en.wikipedia.org/wiki/Linked_Data) currently receives some attention, with its emphasis on exposing and connecting data using technologies like URIs, HTTP and RDF to improve information discovery on the web. With its focus on relationships and discovery, it seems that Dewey will be well prepared to become part of this big linked data set. Now it is about putting the classification back into the world!
    Theme
    Semantic Web
  15. Dobrev, P.; Kalaydjiev, O.; Angelova, G.: From conceptual structures to semantic interoperability of content (2007) 0.02
    0.015938118 = product of:
      0.047814354 = sum of:
        0.03276989 = weight(_text_:computer in 4607) [ClassicSimilarity], result of:
          0.03276989 = score(doc=4607,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.20188503 = fieldWeight in 4607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4607)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 4607) [ClassicSimilarity], result of:
              0.030088935 = score(doc=4607,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 4607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4607)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Series
    Lecture notes in computer science: Lecture notes in artificial intelligence ; 4604
    Source
    Conceptual structures: knowledge architectures for smart applications: 15th International Conference on Conceptual Structures, ICCS 2007, Sheffield, UK, July 22 - 27, 2007 ; proceedings. Eds.: U. Priss et al.
  16. Krause, J.: Semantic heterogeneity : comparing new semantic web approaches with those of digital libraries (2008) 0.02
    0.01570389 = product of:
      0.09422334 = sum of:
        0.09422334 = weight(_text_:web in 1908) [ClassicSimilarity], result of:
          0.09422334 = score(doc=1908,freq=26.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.65002745 = fieldWeight in 1908, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1908)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - To demonstrate that newer developments in the semantic web community, particularly those based on ontologies (the Simple Knowledge Organization System and others), mitigate common arguments from the digital library (DL) community against participation in the Semantic web. Design/methodology/approach - The approach is a semantic web discussion focusing on the weak structure of the Web and the lack of consideration given to the semantic content during indexing. Findings - The points criticised by the semantic web and ontology approaches are the same as those of the DL "Shell model approach" from the mid-1990s, with emphasis on the centrality of its heterogeneity components (used, for example, in vascoda). The Shell model argument began with the "invisible web", necessitating the restructuring of DL approaches. The conclusion is that both approaches fit well together and that the Shell model, with its semantic heterogeneity components, can be reformulated on the semantic web basis. Practical implications - A reinterpretation of the DL approaches of semantic heterogeneity and adaptation to standards and tools supported by the W3C should be the best solution. It is therefore recommended that - although most of the semantic web standards are not technologically refined for commercial applications at present - all individual DL developments should be checked for their adaptability to the W3C standards of the semantic web. Originality/value - A unique conceptual analysis of the parallel developments emanating from the digital library and semantic web communities.
    Footnote
    Beitrag eines Themenheftes "Digital libraries and the semantic web: context, applications and research".
    Theme
    Semantic Web
  17. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.01
    0.013725774 = product of:
      0.04117732 = sum of:
        0.026132854 = weight(_text_:web in 2627) [ClassicSimilarity], result of:
          0.026132854 = score(doc=2627,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 2627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2627)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2627) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2627,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2627)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus considerable efforts are being made to automate the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We address the basic question "What are the pros and cons of human and automatic mapping and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and come up with some basic recommendations on what approach to use when.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  18. Nicholson, D.: High-Level Thesaurus (HILT) project : interoperability and cross-searching distributed services (200?) 0.01
    0.012845224 = product of:
      0.07707134 = sum of:
        0.07707134 = weight(_text_:wide in 5966) [ClassicSimilarity], result of:
          0.07707134 = score(doc=5966,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 5966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=5966)
      0.16666667 = coord(1/6)
    
    Abstract
    My presentation is about HILT, the High-Level Thesaurus Project, which is looking, very roughly speaking, at how we might deal with interoperability problems relating to cross-searching distributed services by subject. The aims of HILT are to study and report on the problem of cross-searching and browsing by subject across a range of communities, services, and service or resource types in the UK, given the wide range of subject schemes and associated practices in place.
  19. Liang, A.C.; Sini, M.: Mapping AGROVOC and the Chinese Agricultural Thesaurus : definitions, tools, procedures (2006) 0.01
    0.0112395715 = product of:
      0.067437425 = sum of:
        0.067437425 = weight(_text_:wide in 5707) [ClassicSimilarity], result of:
          0.067437425 = score(doc=5707,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.342674 = fieldWeight in 5707, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5707)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper describes the procedures for a concept-based mapping of two agricultural thesauri, the multilingual AGROVOC, created and maintained by the Food and Agricultural Organization, and the bilingual Chinese Agricultural Thesaurus, created and maintained by the Chinese Academy of Agricultural Science. Conducted under the auspices of FAO's Agricultural Ontology Service, the mapping project aims to extend AGROVOC with an additional set of perspectives on the agricultural domains, enrich its domain and language coverage, and make use of AGROVOC as a common data model for data exchange among a wide range of multilingual repositories within agriculture.
  20. Burstein, M.; McDermott, D.V.: Ontology translation for interoperability among Semantic Web services (2005) 0.01
    0.010668693 = product of:
      0.064012155 = sum of:
        0.064012155 = weight(_text_:web in 2661) [ClassicSimilarity], result of:
          0.064012155 = score(doc=2661,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4416067 = fieldWeight in 2661, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2661)
      0.16666667 = coord(1/6)
    
    Abstract
    Research on semantic web services promises greater interoperability among software agents and web services by enabling content-based automated service discovery and interaction. Although this is to be based on the use of shared ontologies published on the semantic web, services produced and described by different developers may well use different, perhaps partly overlapping, sets of ontologies. Interoperability will depend on ontology mappings and architectures supporting the associated translation processes. The question we ask is: does the traditional approach of introducing mediator agents to translate messages between requestors and services work in such an open environment? This article reviews some of the processing assumptions that were made in the development of the semantic web service modeling ontology OWL-S and argues that, as a practical matter, the translation function cannot always be isolated in mediators. Ontology mappings need to be published on the semantic web just as ontologies themselves are. The translation for service discovery, service process model interpretation, task negotiation, service invocation, and response interpretation may then be distributed to various places in the architecture so that translation can be done in the specific goal-oriented informational contexts of the agents performing these processes. We present arguments for assigning translation responsibility to particular agents in the cases of service invocation, response translation, and matchmaking.
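
    The article's architectural point, that mappings should be published data any agent can apply rather than logic hidden inside a mediator, can be pictured with a toy translation table; the predicate names and the message below are invented:

```python
# An invented, published mapping between two service ontologies:
published_mapping = {"ex1:price": "ex2:cost", "ex1:item": "ex2:product"}

def translate_message(message, mapping):
    """Any agent can fetch the published mapping and rewrite its own
    messages, instead of routing everything through one mediator."""
    return {mapping.get(k, k): v for k, v in message.items()}

request = {"ex1:item": "book-42", "ex1:price": 10.0}
print(translate_message(request, published_mapping))
```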

Languages

  • e 51
  • d 13

Types

  • a 41
  • el 25
  • r 2
  • x 2