Search (119 results, page 1 of 6)

  • theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.19
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.14
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  3. Landry, P.: The evolution of subject heading languages in Europe and their impact on subject access interoperability (2008) 0.04
    Abstract
    Work in establishing interoperability between Subject Heading Languages (SHLs) in Europe is fairly recent, and much work is still needed before users can successfully conduct subject searches across information resources in European libraries. Over the last 25 years, many subject heading lists were created or developed from existing ones. Obstacles to effective interoperability have been progressively lifted, which has paved the way for interoperability projects to achieve some encouraging results. This paper will look at interoperability approaches in the area of subject indexing tools and will present a short overview of the development of European SHLs. It will then look at the conditions necessary for effective and comprehensive interoperability using the method of linking subject headings, as used by the »Multilingual Access to Subject Headings project« (MACS).
  4. Hoekstra, R.: BestMap: context-aware SKOS vocabulary mappings in OWL 2 (2009) 0.04
    Abstract
    This paper describes an approach to SKOS vocabulary mapping that takes into account the context in which vocabulary terms are used in annotations. The standard vocabulary mapping properties in SKOS only allow for binary mappings between concepts. In the BestMap ontology, annotated resources are the contexts in which annotations coincide, allowing more fine-grained control over when mappings hold. A mapping between two vocabularies is defined as a class that groups descriptions of a resource. We use the OWL 2 features for property chains, disjoint properties, union, intersection and negation, together with careful use of equivalence and subsumption, to specify these mappings.
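    For illustration (not from the paper), here is a minimal Python/rdflib sketch of the plain binary SKOS mapping that the abstract contrasts with BestMap; the vocabulary namespaces and concept names are hypothetical, and the BestMap ontology itself is not reproduced here.

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF, SKOS

      # Hypothetical concept URIs from two vocabularies to be mapped.
      VOCAB_A = Namespace("http://example.org/vocabA/")
      VOCAB_B = Namespace("http://example.org/vocabB/")

      g = Graph()
      g.bind("skos", SKOS)

      # A standard SKOS mapping is a binary statement between two concepts;
      # it holds unconditionally, independent of the annotated resource,
      # which is exactly the limitation BestMap addresses.
      g.add((VOCAB_A.Painting, RDF.type, SKOS.Concept))
      g.add((VOCAB_B.Artwork, RDF.type, SKOS.Concept))
      g.add((VOCAB_A.Painting, SKOS.exactMatch, VOCAB_B.Artwork))

      print(g.serialize(format="turtle"))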
  5. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.03
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to integrate the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.
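    The spine-based cross-browsing idea in the abstract above can be illustrated with a toy crosswalk lookup in Python; the terms, DDC notations, and mappings below are purely illustrative assumptions, not data from the prototype.

      # Illustrative spine mappings: terms from two vocabularies mapped to
      # DDC class numbers that act as the spine.
      UKAT_TO_DDC = {"Computer programming": "005.1"}
      ACM_TO_DDC = {"D.1 Programming Techniques": "005.1"}

      def cross_browse(ukat_term: str) -> list[str]:
          """Return ACM classes that share the same DDC spine class as a UKAT term."""
          ddc = UKAT_TO_DDC.get(ukat_term)
          if ddc is None:
              return []
          return [acm for acm, cls in ACM_TO_DDC.items() if cls == ddc]

      print(cross_browse("Computer programming"))  # ['D.1 Programming Techniques']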
  6. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.03
    Abstract
    Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically superimposed and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases and examples of relationships between objects, suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM, were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified were collated and compared to identify those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
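    For illustration only, a Python/rdflib sketch of how one culturally relevant relationship from such a vocabulary might be published as LOD to interlink two collection objects; the namespace, property name, and object URIs are hypothetical and do not reproduce the thirty-three relationships of the article.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      # Hypothetical LOD property vocabulary of culturally relevant relationships.
      CRR = Namespace("http://example.org/crr/")
      EX = Namespace("http://example.org/objects/")

      g = Graph()
      g.bind("crr", CRR)

      # Declare one relationship as an RDF property.
      g.add((CRR.isSketchOf, RDF.type, RDF.Property))
      g.add((CRR.isSketchOf, RDFS.label, Literal("is sketch of", lang="en")))

      # Interlink two objects from different digital collections.
      g.add((EX.drawing42, CRR.isSketchOf, EX.painting7))

      print(g.serialize(format="turtle"))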
  7. Miller, E.; Schloss. B.; Lassila, O.; Swick, R.R.: Resource Description Framework (RDF) : model and syntax (1997) 0.02
    Abstract
    RDF - the Resource Description Framework - is a foundation for processing metadata; it provides interoperability between applications that exchange machine-understandable information on the Web. RDF emphasizes facilities to enable automated processing of Web resources. RDF metadata can be used in a variety of application areas; for example: in resource discovery to provide better search engine capabilities; in cataloging for describing the content and content relationships available at a particular Web site, page, or digital library; by intelligent software agents to facilitate knowledge sharing and exchange; in content rating; in describing collections of pages that represent a single logical "document"; for describing intellectual property rights of Web pages, and in many others. RDF with digital signatures will be key to building the "Web of Trust" for electronic commerce, collaboration, and other applications. Metadata is "data about data" or specifically in the context of RDF "data describing web resources." The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application. Many times the same resource will be interpreted in both ways simultaneously. RDF encourages this view by using XML as the encoding syntax for the metadata. The resources being described by RDF are, in general, anything that can be named via a URI. The broad goal of RDF is to define a mechanism for describing resources that makes no assumptions about a particular application domain, nor defines the semantics of any application domain. The definition of the mechanism should be domain neutral, yet the mechanism should be suitable for describing information about any domain. This document introduces a model for representing RDF metadata and one syntax for expressing and transporting this metadata in a manner that maximizes the interoperability of independently developed web servers and clients. The syntax described in this document is best considered as a "serialization syntax" for the underlying RDF representation model. The serialization syntax is XML, XML being the W3C's work-in-progress to define a richer Web syntax for a variety of applications. RDF and XML are complementary; there will be alternate ways to represent the same RDF data model, some more suitable for direct human authoring. Future work may lead to including such alternatives in this document.
    Content
    RDF Data Model: At the core of RDF is a model for representing named properties and their values. These properties serve both to represent attributes of resources (and in this sense correspond to usual attribute-value pairs) and to represent relationships between resources. The RDF data model is a syntax-independent way of representing RDF statements. RDF statements that are syntactically very different could mean the same thing. This concept of equivalence in meaning is very important when performing queries, aggregation and a number of other tasks at which RDF is aimed. The equivalence is defined in a clean, machine-understandable way. Two pieces of RDF are equivalent if and only if their corresponding data model representations are the same. Table of contents: 1. Introduction; 2. RDF Data Model; 3. RDF Grammar; 4. Signed RDF; 5. Examples; 6. Appendix A: Brief Explanation of XML Namespaces.
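    As a small, hypothetical illustration of the model-versus-syntax point made above, the Python/rdflib sketch below builds a single RDF statement and serializes the same data model in two different syntaxes; the resource URI and literal value are invented for the example.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import DC

      EX = Namespace("http://example.org/doc/")  # hypothetical resource namespace

      g = Graph()
      g.bind("dc", DC)

      # One statement: resource --property--> value (an attribute of the resource).
      g.add((EX.page1, DC.creator, Literal("Example Author")))

      # Two syntactically different serializations of the same data model;
      # RDF equivalence is defined on the model, not on any particular syntax.
      print(g.serialize(format="xml"))
      print(g.serialize(format="turtle"))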
  8. Celli, F. et al.: Enabling multilingual search through controlled vocabularies : the AGRIS approach (2016) 0.02
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  9. Mayr, P.; Petras, V.: Building a Terminology Network for Search : the KoMoHe project (2008) 0.02
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  10. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.02
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with the participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
  11. Galinski, C.: Fragen der semantischen Interoperabilität brechen jetzt überall auf (o.J.) 0.02
    Content
    "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen." ... "Der Begriff der semantischen Interoperabilität ist aufgetreten mit dem Semantic Web, einer Konzeption von Tim Berners-Lee, der sagt, das zunehmend die Computer miteinander über hochstandardisierte Sprachen, die wenig mit Natürlichsprachlichkeit zu tun haben, kommunizieren werden. Was er nicht sieht, ist dass rein technische Interoperabilität nicht ausreicht, um die semantische Interoperabilität herzustellen."
    Date
    22. 1.2011 10:16:32
  12. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  13. Rasmussen Pennington, D.; Cagnazzo, L.: Relationship status : libraries and linked data in Europe (2018) 0.02
  14. Kim, J.-M.; Shin, H.; Kim, H.-J.: Schema and constraints-based matching and merging of Topic Maps (2007) 0.02
    Abstract
    In this paper, we propose a multi-strategic matching and merging approach to find correspondences between ontologies based on the syntactic or semantic characteristics and constraints of the Topic Maps. Our multi-strategic matching approach consists of a linguistic module and a Topic Map constraints-based module. The linguistic module computes similarities between concepts using morphological analysis, string normalization and tokenization, and language-dependent heuristics. The Topic Map constraints-based module takes advantage of several Topic Maps-dependent techniques such as topic property-based matching, hierarchy-based matching, and association-based matching. This is a composite matching procedure that need not generate a cross-pair of all topics from the ontologies, because unmatched pairs of topics can be removed using the characteristics and constraints of the Topic Maps. Merging between Topic Maps follows the matching operations. We set up the MERGE function to integrate two Topic Maps into a new Topic Map, which satisfies such merge requirements as entity preservation, property preservation, relation preservation, and conflict resolution. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Wikipedia philosophy ontology as input ontologies. Our experiments show that the automatically generated matching results conform to the outputs generated manually by domain experts and can be of great benefit to the subsequent merging operations.
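    A rough Python sketch of the kind of linguistic similarity step the abstract describes (string normalization, tokenization, and a similarity score); the normalization rules, similarity measure, and threshold idea are assumptions for illustration, not the authors' algorithm.

      import re
      from difflib import SequenceMatcher

      def normalize(name: str) -> str:
          """Lower-case a topic name, strip punctuation, and collapse whitespace."""
          name = re.sub(r"[^a-z0-9 ]+", " ", name.lower())
          return " ".join(name.split())

      def name_similarity(a: str, b: str) -> float:
          """Similarity between two topic names, in the range 0.0 to 1.0."""
          return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

      # Candidate pairs scoring below some threshold could be discarded before
      # the constraints-based module (hierarchy- and association-based matching).
      print(name_similarity("Western Philosophy", "Philosophy, Western"))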
  15. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2010) 0.02
    Abstract
    Purpose - The paper aims to develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach - Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings - The major findings showed that, given the large variety of terminology resources distributed throughout the web, the proposed middleware service is essential to integrate technically and semantically the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations are also made, outlining the important approaches and features that support such a cross-browsing middleware service. Originality/value - Cross-browsing features are lacking in current library portal meta-search systems. Users are therefore deprived of this valuable retrieval provision. This research investigated the case for such a system and developed a prototype to fill this gap.
    Footnote
    Contribution to a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
  16. Zapounidou, S.; Sfakakis, M.; Papatheodorou, C.: Mapping derivative relationships from RDA to BIBFRAME 2 (2019) 0.02
    Abstract
    Semantic interoperability between Resource Description and Access (RDA) and BIBFRAME models is of great interest to the library community. In this context, this work investigates the mapping of core entities, inherent and derivative relationships from RDA to BIBFRAME, and proposes mapping rules assessed using two gold datasets. Findings indicate that RDA core entities and inherent relationships can be successfully mapped to BIBFRAME using the bf:hasExpression property, while extending bf:hasExpression as transitive simplifies BIBFRAME representations. Moreover, mapping derivative relationships between RDA Expressions was successful with loss of specificity in non-translation cases. The mapping of derivative relationships between RDA Works produced "noisy" bf:hasDerivative occurrences in BIBFRAME.
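    A minimal Python/rdflib sketch of the bf:hasExpression linking discussed in the abstract, including the proposed transitive extension; the work and expression URIs are hypothetical and the snippet does not reproduce the article's mapping rules.

      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL, RDF

      BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
      EX = Namespace("http://example.org/resource/")  # hypothetical identifiers

      g = Graph()
      g.bind("bf", BF)

      # A work-level bf:Work linked to an expression-level bf:Work.
      g.add((EX.work1, RDF.type, BF.Work))
      g.add((EX.expression1, RDF.type, BF.Work))
      g.add((EX.work1, BF.hasExpression, EX.expression1))

      # The article proposes treating bf:hasExpression as transitive to simplify
      # the resulting BIBFRAME representations.
      g.add((BF.hasExpression, RDF.type, OWL.TransitiveProperty))

      print(g.serialize(format="turtle"))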
  17. Sfakakis, M.; Zapounidou, S.; Papatheodorou, C.: Mapping derivative relationships from BIBFRAME 2.0 to RDA (2020) 0.02
    Abstract
    The mapping from BIBFRAME 2.0 to Resource Description and Access (RDA) is studied focusing on core entities, inherent relationships, and derivative relationships. The proposed mapping rules are evaluated with two gold datasets. Findings indicate that 1) core entities, inherent and derivative relationships may be mapped to RDA, 2) the use of the bf:hasExpression property may cluster bf:Works with the same ideational content and enable their mapping to RDA Works with their Expressions, and 3) cataloging policies have a significant impact on the interoperability between RDA and BIBFRAME datasets. This work complements the investigation of semantic interoperability between the two models previously presented in this journal.
  18. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    Series
    Communications in computer and information science; 672
  19. Widhalm, R.; Mueck, T.A.: Merging topics in well-formed XML topic maps (2003) 0.01
    Abstract
    Topic Maps are a standardized modelling approach for the semantic annotation and description of WWW resources. They enable improved search and navigational access to information objects stored in semi-structured information spaces like the WWW. However, the corresponding standards ISO 13250 and XTM (XML Topic Maps) lack formal semantics; several questions concerning e.g. subclassing, inheritance or merging of topics are left open. The proposed TMUML meta model, directly derived from the well-known UML meta model, is a meta model for Topic Maps which enables semantic constraints to be formulated in OCL (Object Constraint Language) in order to answer such open questions and overcome possible inconsistencies in Topic Map repositories. We will examine the XTM merging conditions and show, in several examples, how the TMUML meta model enables semantic constraints for Topic Map merging to be formulated in OCL. Finally, we will show how the TM validation process, i.e., checking whether a Topic Map is well-formed, includes our merging conditions.
    Series
    Lecture notes in computer science; vol. 2870
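    A rough, simplified Python sketch of one XTM-style merging condition of the kind examined in the paper (topics sharing a subject identifier become candidates for merging); the data structures are assumptions for illustration and do not reproduce the TMUML/OCL formalization.

      from dataclasses import dataclass, field

      @dataclass
      class Topic:
          id: str
          subject_identifiers: set[str] = field(default_factory=set)

      def should_merge(a: Topic, b: Topic) -> bool:
          """Subject-based merging condition: topics sharing at least one
          subject identifier are taken to reify the same subject."""
          return bool(a.subject_identifiers & b.subject_identifiers)

      t1 = Topic("t1", {"http://example.org/subjects/kant"})
      t2 = Topic("t2", {"http://example.org/subjects/kant",
                        "http://example.org/subjects/philosopher"})
      print(should_merge(t1, t2))  # True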
  20. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.01
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
    LCSH
    Semantic integration (Computer systems)
    Subject
    Semantic integration (Computer systems)

Languages

  • e 106
  • d 12

Types

  • a 77
  • el 37
  • m 12
  • s 7
  • x 5
  • n 1
  • p 1
  • r 1