Search (212 results, page 11 of 11)

  • theme_ss:"Semantische Interoperabilität"
  • type_ss:"a"
  1. Tudhope, D.; Binding, C.: Toward terminology services : experiences with a pilot Web service thesaurus browser (2006) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 1955) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1955,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1955, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1955)
      0.16666667 = coord(1/6)
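The breakdown above follows Lucene's ClassicSimilarity. As a check, its arithmetic can be reproduced in a few lines (the constants are read off the tree above; tf = √freq and idf = 1 + ln(maxDocs/(docFreq+1)) are Lucene's standard formulas):

```python
import math

# Constants read from the explain tree above (doc 1955)
freq, doc_freq, max_docs = 8.0, 30841, 44218
query_norm, field_norm, coord = 0.043654136, 0.03125, 1 / 6

tf = math.sqrt(freq)                           # 2.828427
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.3602545
query_weight = idf * query_norm                # 0.059380736
field_weight = tf * idf * field_norm           # 0.120230645
score = coord * query_weight * field_weight    # 0.0011898974
```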
    
    Abstract
    Dublin Core recommends controlled terminology for the subject of a resource. Knowledge organization systems (KOS), such as classifications, gazetteers, taxonomies and thesauri, provide controlled vocabularies that organize and structure concepts for indexing, classifying, browsing and search. For example, a thesaurus employs a set of standard semantic relationships (ISO 2788, ISO 5964), and major thesauri have a large entry vocabulary of terms considered equivalent for retrieval purposes. Many KOS have been made available for Web-based access. However, they are often not fully integrated into indexing and search systems and the full potential for networked and programmatic access remains untapped. The lack of standardized access and interchange formats impedes wider use of KOS resources. We developed a Web demonstrator (www.comp.glam.ac.uk/~FACET/webdemo/) for the FACET project (www.comp.glam.ac.uk/~facet/facetproject.html) that explored thesaurus-based query expansion with the Getty Art and Architecture Thesaurus. A Web demonstrator was implemented via Active Server Pages (ASP) with server-side scripting and compiled server-side components for database access, and cascading style sheets for presentation. The browser-based interactive interface permits dynamic control of query term expansion. However, being based on a custom thesaurus representation and API, the techniques cannot be applied directly to thesauri in other formats on the Web. General programmatic access requires commonly agreed protocols, for example, building on Web and Grid services. The development of common KOS representation formats and service protocols are closely linked. Linda Hill and colleagues argued in 2002 for a general KOS service protocol from which protocols for specific types of KOS can be derived. Thus, in the future, a combination of thesaurus and query protocols might permit a thesaurus to be used with a choice of search tools on various kinds of databases. 
Service-oriented architectures bring an opportunity for moving toward a clearer separation of interface components from the underlying data sources. In our view, basing distributed protocol services on the atomic elements of thesaurus data structures and relationships is not necessarily the best approach because client operations that require multiple client-server calls would carry too much overhead. This would limit the interfaces that could be offered by applications following such a protocol. Advanced interactive interfaces require protocols that group primitive thesaurus data elements (via their relationships) into composites to achieve reasonable response.
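The trade-off described above can be sketched as follows: a query-expansion interface either issues one atomic call per term (one round-trip each) or one composite call that returns the whole expansion set. This toy illustration is a sketch only; the thesaurus fragment and function names are invented, not the FACET API:

```python
# Invented thesaurus fragment: term -> its narrower terms
THESAURUS = {
    "vessels": ["bowls", "jars"],
    "bowls": ["tea bowls"],
    "jars": [],
    "tea bowls": [],
}

def narrower(term):
    """Atomic operation: one server round-trip per term."""
    return THESAURUS.get(term, [])

def expand(term, depth):
    """Composite operation: the server traverses the hierarchy and
    returns the whole expansion set in a single round-trip."""
    result, frontier = {term}, [term]
    for _ in range(depth):
        frontier = [n for t in frontier for n in narrower(t)]
        result.update(frontier)
    return result
```

Expanding "vessels" two levels deep via atomic calls would cost one call per visited term; the composite form costs one call total, which is the response-time argument made above.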
  2. Stamou, G.; Chortaras, A.: Ontological query answering over semantic data (2017) 0.00
    
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
  3. Park, J.-r.: Semantic interoperability and metadata quality : an analysis of metadata item records of digital image collections (2006) 0.00
    
    Abstract
    This paper is a current assessment of the status of metadata creation and mapping between cataloger-defined field names and Dublin Core (DC) metadata elements across three digital image collections. The metadata elements that evince the most frequently inaccurate, inconsistent and incomplete DC metadata application are identified. In addition, the most frequently occurring locally added metadata elements and the patterns they form are examined. For this, a randomly collected sample of 659 metadata item records from three digital image collections is analyzed. Implications and issues drawn from the evaluation of the current status of metadata creation and mapping are also discussed in relation to the issue of semantic interoperability of concept representation across digital image collections. The findings of the study suggest that conceptual ambiguities and semantic overlaps inherent among some DC metadata elements hinder semantic interoperability. The DC metadata scheme needs to be refined in order to disambiguate the semantic relations of those DC metadata elements whose names and corresponding definitions overlap or are conceptually ambiguous. The findings also suggest that the development of mediation mechanisms, such as concept networks that facilitate the metadata creation and mapping process, is critically needed to enhance metadata quality.
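The kind of mapping the study analyzes can be sketched as a lookup from local, cataloger-defined field names to DC elements. The field names and assignments below are invented illustrations of the semantic overlaps reported (e.g. source vs. relation, format vs. type), not data from the study:

```python
# Hypothetical local field names mapped to Dublin Core elements.
# Comments flag the kind of ambiguity the study describes.
local_to_dc = {
    "Photographer": "creator",
    "Digitized by": "contributor",
    "Original item": "source",   # arguably "relation" -- semantic overlap
    "File format": "format",     # often confused with "type"
}

def to_dc(local_field):
    """Map a local field name to its DC element, or None if unmapped."""
    return local_to_dc.get(local_field)
```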
  4. Vizine-Goetz, D.; Houghton, A.; Childress, E.: Web services for controlled vocabularies (2006) 0.00
    
    Abstract
    Amid the debates about whether folksonomies will supplant controlled vocabularies and whether the Library of Congress Subject Headings (LCSH) and Dewey Decimal Classification (DDC) system have outlived their usefulness, libraries, museums and other organizations continue to require efficient, effective access to controlled vocabularies for creating consistent metadata for their collections. In this article, we present an approach for using Web services to interact with controlled vocabularies. Services are implemented within a service-oriented architecture (SOA) framework. SOA is an approach to distributed computing in which services are loosely coupled and discoverable on the network. A set of experimental services for controlled vocabularies is provided through the Microsoft Office (MS) Research task pane (a small window or sidebar that opens next to Internet Explorer (IE) and other Microsoft Office applications). The research task pane is a built-in feature of IE when MS Office 2003 is installed. The research pane enables a user to take advantage of a number of research and reference services accessible over the Internet. Web browsers such as Mozilla Firefox and Opera also provide sidebars, which could be used to deliver similar, loosely coupled Web services.
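The loose coupling attributed to SOA above can be sketched in miniature: clients discover a service by name at run time instead of binding to a concrete implementation. The registry, service name and vocabulary below are invented for illustration, not the article's actual services:

```python
# Minimal service registry: name -> callable, resolved at run time.
registry = {}

def register(name):
    """Decorator that publishes a function under a service name."""
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

@register("vocabulary.suggest")
def suggest(prefix):
    """Toy term-completion service over an invented vocabulary."""
    vocabulary = ["Semantics", "Semantic interoperability", "Serials"]
    return [t for t in vocabulary if t.lower().startswith(prefix.lower())]

# A client knows only the service name, not the implementation:
hits = registry["vocabulary.suggest"]("semantic")
```

Swapping in a different implementation under the same name would leave the client untouched, which is the point of the loose coupling described above.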
  5. Panzer, M.; Zeng, M.L.: Modeling classification systems in SKOS : Some challenges and best-practice (2009) 0.00
    
  6. Wilde, E.: Semantische Interoperabilität von XML Schemas (2005) 0.00
    
    Abstract
    XML is the generally accepted means of exchanging the structured data that many applications require, but on its own it is not sufficient to ensure interoperability between applications. Problems can arise at several levels, from fundamentals such as character encodings all the way to how the content of XML documents is to be understood. This article examines the latter aspect: the question of what is needed so that the exchange of XML not only works syntactically but also rests on a shared understanding on both sides.
  7. Sfakakis, M.; Zapounidou, S.; Papatheodorou, C.: Mapping derivative relationships from BIBFRAME 2.0 to RDA (2020) 0.00
    
    Abstract
    The mapping from BIBFRAME 2.0 to Resource Description and Access (RDA) is studied focusing on core entities, inherent relationships, and derivative relationships. The proposed mapping rules are evaluated with two gold datasets. Findings indicate that 1) core entities, inherent and derivative relationships may be mapped to RDA, 2) the use of the bf:hasExpression property may cluster bf:Works with the same ideational content and enable their mapping to RDA Works with their Expressions, and 3) cataloging policies have a significant impact on the interoperability between RDA and BIBFRAME datasets. This work complements the investigation of semantic interoperability between the two models previously presented in this journal.
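Finding 2 above, clustering bf:Works via bf:hasExpression, can be sketched over a toy triple set. The triples and the grouping function are invented illustrations of the idea, not the paper's mapping rules:

```python
# Invented BIBFRAME-style triples: two bf:Works expressing work/1
triples = [
    ("work/1", "bf:hasExpression", "work/2"),
    ("work/1", "bf:hasExpression", "work/3"),
]

def cluster_works(triples):
    """Group bf:Works that share ideational content: each subject of a
    bf:hasExpression statement collects its expressions into one cluster,
    which could then map to an RDA Work with its Expressions."""
    clusters = {}
    for s, p, o in triples:
        if p == "bf:hasExpression":
            clusters.setdefault(s, {s}).add(o)
    return clusters
```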
  8. Lange, C.; Mossakowski, T.; Galinski, C.; Kutz, O.: Making heterogeneous ontologies interoperable through standardisation : a Meta Ontology Language to be standardised: Ontology Integration and Interoperability (OntoIOp) (2011) 0.00
    
    Abstract
    Assistive technology, especially for persons with disabilities, increasingly relies on electronic communication among users, between users and their devices, and among these devices. Making such ICT accessible and inclusive often requires remedial programming, which tends to be costly or even impossible. We therefore aim at more interoperable devices, services accessing these devices, and content delivered by these services, at the levels of (1) data and metadata, (2) data models and data modelling methods, and (3) metamodels as well as a meta ontology language. Even though ontologies are widely used to enable content interoperability, there is currently no unified framework for ontology interoperability itself. This paper outlines the design considerations underlying OntoIOp (Ontology Integration and Interoperability), a new standardisation activity in ISO/TC 37/SC 3 intended to become an international standard that fills this gap.
  9. Panzer, M.: Increasing patient findability of medical research : annotating clinical trials using standard vocabularies (2017) 0.00
    
    Abstract
    Multiple groups at Mayo Clinic organize knowledge with the aid of metadata for a variety of purposes. The ontology group focuses on consumer-oriented health information, using several controlled vocabularies to support and coordinate care providers, consumers, clinical knowledge and, as part of its research management, information on clinical trials. Poor findability, inconsistent indexing and specialized language undermined the goal of increasing trial participation. The ontology group designed a metadata framework addressing disorders and procedures, investigational drugs and clinical departments; adopted and translated the clinical terminology of the SNOMED CT and RxNorm vocabularies into consumer language; and coordinated terminology with Mayo's Consumer Health Vocabulary. The result enables retrieval of clinical trial information from multiple access points, including conditions, procedures, drug names, organizations involved and trial phase. The jump in inquiries since the search site was revised and the vocabularies were modified shows evidence of success.
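The clinical-to-consumer translation described above can be sketched as follows. The term pairs here are invented examples of the general idea, not actual SNOMED CT, RxNorm or Consumer Health Vocabulary content:

```python
# Invented clinical-term -> consumer-term pairs for illustration
clinical_to_consumer = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
}

def searchable_terms(clinical_term):
    """Index a trial under both the clinical term and, when one is
    available, its consumer-language equivalent, so either phrasing
    retrieves the trial."""
    consumer = clinical_to_consumer.get(clinical_term)
    return [clinical_term] + ([consumer] if consumer else [])
```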
  10. Kim, J.-M.; Shin, H.; Kim, H.-J.: Schema and constraints-based matching and merging of Topic Maps (2007) 0.00
    
    Abstract
    In this paper, we propose a multi-strategic matching and merging approach to find correspondences between ontologies based on the syntactic or semantic characteristics and constraints of Topic Maps. Our multi-strategic matching approach consists of a linguistic module and a Topic Map constraints-based module. The linguistic module computes similarities between concepts using morphological analysis, string normalization, tokenization and language-dependent heuristics. The Topic Map constraints-based module takes advantage of several Topic Maps-dependent techniques such as topic property-based matching, hierarchy-based matching, and association-based matching. This composite matching procedure need not generate a cross-pairing of all topics from the ontologies, because unmatched pairs of topics can be removed using the characteristics and constraints of the Topic Maps. Merging between Topic Maps follows the matching operations. We define the MERGE function to integrate two Topic Maps into a new Topic Map that satisfies merge requirements such as entity preservation, property preservation, relation preservation, and conflict resolution. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Wikipedia philosophy ontology as input ontologies. Our experiments show that the automatically generated matching results conform to the outputs generated manually by domain experts and can be of great benefit to the subsequent merging operations.
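The linguistic module's name matching (normalization plus tokenization) can be sketched as below. The Jaccard overlap used as the similarity score is an assumption for illustration, not necessarily the paper's measure:

```python
import re

def tokens(name):
    """Normalize a topic name and tokenize it: lowercase, then keep
    only alphanumeric runs (drops punctuation and word order)."""
    return set(re.findall(r"[a-z0-9]+", name.lower()))

def name_similarity(a, b):
    """Jaccard overlap of the two token sets, in [0, 1]."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
```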
  11. Baker, T.; Sutton, S.A.: Linked data and the charm of weak semantics : Introduction: the strengths of weak semantics (2015) 0.00
    
    Abstract
    Logic and precision are fundamental to ontologies underlying the semantic web and, by extension, to linked data. This special section focuses on the interaction of semantics, ontologies and linked data. The discussion presents the Simple Knowledge Organization System (SKOS) as a less formal strategy for expressing concept hierarchies and associations and questions the value of deep domain ontologies in favor of simpler vocabularies that are more open to reuse, albeit risking illogical outcomes. RDF ontologies harbor another unexpected drawback. While structurally sound, they leave validation gaps permitting illogical uses, a problem being addressed by a W3C Working Group. Data models based on RDF graphs and properties may replace traditional library catalog models geared to predefined entities, with relationships between RDF classes providing the semantic connections. The BIBFRAME Initiative takes a different and streamlined approach to linking data, building rich networks of information resources rather than relying on a strict underlying structure and vocabulary. Taken together, the articles illustrate the trend toward a pragmatic approach to a Semantic Web, sacrificing some specificity for greater flexibility and partial interoperability.
  12. Lee, S.: Pidgin metadata framework as a mediator for metadata interoperability (2021) 0.00
    
    Abstract
    A pidgin metadata framework based on the concept of pidgin metadata is proposed to complement the limitations of existing approaches to metadata interoperability and to achieve more reliable metadata interoperability. The framework consists of three hierarchically structured layers and reflects the semantic and structural characteristics of various metadata. Layer 1 performs both an external function, serving as an anchor for semantic association between metadata elements, and an internal function, providing semantic categories that can encompass detailed elements. Layer 2 is an intermediate layer composed of substantial elements from existing metadata; it associates different metadata elements describing the same or similar aspects of information resources with the semantic categories of Layer 1. Layer 3 implements the semantic relationships between Layer 1 and Layer 2 through Resource Description Framework syntax. With this structure, the pidgin metadata framework can establish criteria for semantic connection between different elements and fully reflect the complexity and heterogeneity of various metadata. Additionally, it is expected to provide a bibliographic environment that can achieve more reliable metadata interoperability than existing approaches by securing communication between different metadata schemes.
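The three-layer structure described above can be sketched roughly as follows. All category and element names here are invented for illustration; only the layering idea comes from the abstract:

```python
# Layer 1: broad semantic categories acting as anchors
layer1 = {"Agent", "Title", "Date"}

# Layer 2: concrete elements from existing schemes, each tied to a category
layer2 = {"dc:creator": "Agent", "marc:100a": "Agent", "dc:title": "Title"}

# Layer 3: the linking statements, RDF-style (subject, predicate, object)
layer3 = [(elem, "maps-to", cat) for elem, cat in layer2.items()]

def interoperable(elem_a, elem_b):
    """Two elements can interoperate if they share a Layer 1 category."""
    cat = layer2.get(elem_a)
    return cat is not None and cat == layer2.get(elem_b)
```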

Languages

  • e 154
  • d 57
  • pt 1