Search (97 results, page 1 of 5)

  • Filter: theme_ss:"Semantische Interoperabilität"
  1. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.28
    
    Abstract
    Although service-oriented architectures go a long way toward providing interoperability in distributed, heterogeneous environments, managing semantic differences in such environments remains a challenge. We give an overview of the issue of semantic interoperability (integration), provide a semantic characterization of services, and discuss the role of ontologies. We then analyze four basic models of semantic interoperability that differ with respect to the mapping between service descriptions and ontologies and with respect to where the evaluation of the integration logic is performed. We also provide some guidelines for selecting among the possible interoperability models.
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
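Each hit above carries a relevance score (0.28, 0.12, ...) computed by Lucene's ClassicSimilarity, whose per-term field weight is tf x idf x fieldNorm. A minimal sketch, assuming the standard ClassicSimilarity formulas (the function name is ours; the full document score additionally multiplies in queryNorm and coord factors, which this sketch omits):

```python
import math

def field_weight(freq: float, doc_freq: int, max_docs: int, field_norm: float) -> float:
    """Lucene ClassicSimilarity per-term field weight: tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                              # sqrt(2) = 1.4142135 for freq=2
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~8.478 for docFreq=24, maxDocs=44218
    return tf * idf * field_norm

# Rare term in the first hit: freq=2, docFreq=24, maxDocs=44218, fieldNorm=0.0546875
w = field_weight(2.0, 24, 44218, 0.0546875)
print(round(w, 5))  # ~0.65569, matching the fieldWeight the engine reports for hit 1
```

This reproduces why the two rare tokens dominate hit 1's score: with docFreq=24 their idf (about 8.48) is more than twice that of a common term like "services" (idf about 3.67 at docFreq=3057).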
  2. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.12
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  3. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.05
    
    Abstract
    Ontologies are often viewed as the silver bullet for applications such as database integration, peer-to-peer systems, e-commerce, semantic web services, and social networks. However, in open or evolving systems such as the semantic web, different parties will in general adopt different ontologies, so merely using ontologies, like using XML, does not reduce heterogeneity: it raises the heterogeneity problem to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to this semantic heterogeneity problem. Ontology matching aims at finding correspondences between semantically related entities of different ontologies; these correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many matching solutions have been proposed from various viewpoints, e.g., databases, information systems, and artificial intelligence. Researchers and practitioners will find here a reference book that presents currently available work in a uniform framework; the techniques presented apply equally to database schema matching, catalog integration, XML schema matching, and related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching, through a detailed account of matching techniques and matching systems from theoretical, practical, and application perspectives.
    Date
    20. 6.2012 19:08:22
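The core task described in entry 3, finding correspondences between entities of different ontologies, can be illustrated with the simplest family of techniques such books survey: string-based label comparison. A deliberately naive sketch (the class labels, threshold of 0.8, and function name are our illustrative assumptions, not taken from the book):

```python
from difflib import SequenceMatcher

def match_labels(onto_a, onto_b, threshold=0.8):
    """Propose equivalence correspondences between two ontologies' class
    labels using normalized string similarity (a toy stand-in for real
    ontology matchers, which combine many such techniques)."""
    correspondences = []
    for a in onto_a:
        for b in onto_b:
            sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if sim >= threshold:
                correspondences.append((a, b, round(sim, 2)))
    return correspondences

# Two toy "ontologies" represented as flat label lists
catalog = ["Book", "Author", "PublicationDate"]
schema = ["book", "writer", "publication_date"]
print(match_labels(catalog, schema))
```

Note how purely lexical matching finds "Book"/"book" and "PublicationDate"/"publication_date" but misses the semantically equivalent "Author"/"writer" pair, which is exactly the gap that the structural and semantic matching techniques surveyed in the book address.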
  4. Vizine-Goetz, D.; Houghton, A.; Childress, E.: Web services for controlled vocabularies (2006) 0.04
    
    Abstract
    Amid the debates about whether folksonomies will supplant controlled vocabularies and whether the Library of Congress Subject Headings (LCSH) and Dewey Decimal Classification (DDC) system have outlived their usefulness, libraries, museums and other organizations continue to require efficient, effective access to controlled vocabularies for creating consistent metadata for their collections. In this article, we present an approach for using Web services to interact with controlled vocabularies. Services are implemented within a service-oriented architecture (SOA) framework. SOA is an approach to distributed computing where services are loosely coupled and discoverable on the network. A set of experimental services for controlled vocabularies is provided through the Microsoft Office (MS) Research task pane (a small window or sidebar that opens up next to Internet Explorer (IE) and other Microsoft Office applications). The research task pane is a built-in feature of IE when MS Office 2003 is loaded. The research pane enables a user to take advantage of a number of research and reference services accessible over the Internet. Web browsers, such as Mozilla Firefox and Opera, also provide sidebars which could be used to deliver similar, loosely-coupled Web services.
  5. Zeng, M.L.; Chan, L.M.: Trends and issues in establishing interoperability among knowledge organization systems (2004) 0.04
    
    Abstract
    This report analyzes the methodologies used in establishing interoperability among knowledge organization systems (KOS) such as controlled vocabularies and classification schemes that present the organized interpretation of knowledge structures. The development and trends of KOS are discussed with reference to the online era and the Internet era. Selected current projects and activities addressing KOS interoperability issues are reviewed in terms of the languages and structures involved. The methodological analysis encompasses both conventional and new methods that have proven to be widely accepted, including derivation/modeling, translation/adaptation, satellite and leaf node linking, direct mapping, co-occurrence mapping, switching, linking through a temporary union list, and linking through a thesaurus server protocol. Methods used in link storage and management, as well as common issues regarding mapping and methodological options, are also presented. It is concluded that interoperability of KOS is an unavoidable issue and process in today's networked environment. There have been and will be many multilingual products and services, with many involving various structured systems. Results from recent efforts are encouraging.
  6. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.03
    
    Abstract
    In this presentation, we focus on a solution for providing uniform access to digital libraries and other online services. To enable uniform query access to heterogeneous sources, we must provide metadata interoperability in a way that allows a query language, in this case SPARQL, to cope with the incompatibility of the metadata in the various sources without changing their already existing information models.
    Content
    Presentation given at "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", a workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007.
    Date
    26.12.2011 13:22:46
  7. Dunsire, G.; Nicholson, D.: Signposting the crossroads : terminology Web services and classification-based interoperability (2010) 0.03
    
    Abstract
    The focus of this paper is the provision of terminology- and classification-based interoperability data via web services, initially using interoperability data based on a Dewey Decimal Classification (DDC) spine, but with an aim to explore other possibilities in time, including the use of other spines. The High-Level Thesaurus Project (HILT) Phase IV developed pilot web services based on SRW/U, SOAP, and SKOS to deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to enhance their subject search or browse services. It also developed an associated toolkit to help information services' technical staff embed HILT-related functionality within service interfaces. Several UK information services have created illustrative user-interface enhancements using HILT functionality, and these will demonstrate what is possible. HILT currently has the following subject schemes mounted and available: DDC, CAB, GCMD, HASSET, IPSV, LCSH, MeSH, NMR, SCAS, UNESCO, and AAT. It also has high-level mappings between some of these schemes and DDC, and some deeper pilot mappings available.
    Date
    6. 1.2011 19:22:48
  8. Golub, K.; Tudhope, D.; Zeng, M.L.; Zumer, M.: Terminology registries for knowledge organization systems : functionality, use, and attributes (2014) 0.02
    
    Abstract
    Terminology registries (TRs) are a crucial element of the infrastructure required for resource discovery services, digital libraries, Linked Data, and semantic interoperability generally. They can make the content of knowledge organization systems (KOS) available both for human and machine access. The paper describes the attributes and functionality for a TR, based on a review of published literature, existing TRs, and a survey of experts. A domain model based on user tasks is constructed and a set of core metadata elements for use in TRs is proposed. Ideally, the TR should allow searching as well as browsing for a KOS, matching a user's search while also providing information about existing terminology services, accessible to both humans and machines. The issues surrounding metadata for KOS are also discussed, together with the rationale for different aspects and the importance of a core set of KOS metadata for future machine-based access; a possible core set of metadata elements is proposed. This is dealt with in terms of practical experience and in relation to the Dublin Core Application Profile.
    Date
    22. 8.2014 17:12:54
  9. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.02
    
    Content
    Presentation given at "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", a workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007.
    Date
    26.12.2011 13:22:27
  10. Hubrich, J.: Intersystem relations : Characteristics and functionalities (2011) 0.02
    
    Abstract
    Within the frame of the methodological support of the CrissCross project and the research conducted in the Reseda project, a tiered model of semantic interoperability was developed. This correlates methods of establishing semantic interoperability and types of intersystem relations to search functionalities in retrieval scenarios. In this article the model is outlined and exemplified with reference to respective selective alignment projects.
  11. Dunsire, G.: Enhancing information services using machine-to-machine terminology services (2011) 0.02
    
    Abstract
    This paper describes the basic concepts of terminology services and their role in information retrieval interfaces. Terminology services are consumed by other software applications using machine-to-machine protocols, rather than directly by end-users. An example of a terminology service is the pilot developed by the High Level Thesaurus (HILT) project which has successfully demonstrated its potential for enhancing subject retrieval in operational services. Examples of enhancements in three such services are given. The paper discusses the future development of terminology services in relation to the Semantic Web.
  12. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.02
    
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
  13. Binding, C.; Tudhope, D.: Improving interoperability using vocabulary linked data (2015) 0.02
    
    Abstract
    The concept of Linked Data has been an emerging theme within the computing and digital heritage areas in recent years. The growth and scale of Linked Data has underlined the need for greater commonality in concept referencing, to avoid local redefinition and duplication of reference resources. Achieving domain-wide agreement on common vocabularies would be an unreasonable expectation; however, datasets often already have local vocabulary resources defined, and so the prospects for large-scale interoperability can be substantially improved by creating alignment links from these local vocabularies out to common external reference resources. The ARIADNE project is undertaking large-scale integration of archaeology dataset metadata records, to create a cross-searchable research repository resource. Key to enabling this cross search will be the 'subject' metadata originating from multiple data providers, containing terms from multiple multilingual controlled vocabularies. This paper discusses various aspects of vocabulary mapping. Experience from the previous SENESCHAL project in the publication of controlled vocabularies as Linked Open Data is discussed, emphasizing the importance of unique URI identifiers for vocabulary concepts. There is a need to align legacy indexing data to the uniquely defined concepts and examples are discussed of SENESCHAL data alignment work. A case study for the ARIADNE project presents work on mapping between vocabularies, based on the Getty Art and Architecture Thesaurus as a central hub and employing an interactive vocabulary mapping tool developed for the project, which generates SKOS mapping relationships in JSON and other formats. The potential use of such vocabulary mappings to assist cross search over archaeological datasets from different countries is illustrated in a pilot experiment. The results demonstrate the enhanced opportunities for interoperability and cross searching that the approach offers.
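Entry 13 mentions a mapping tool that "generates SKOS mapping relationships in JSON and other formats" and stresses unique URI identifiers for vocabulary concepts. A minimal sketch of what one such mapping record could look like when serialized as JSON-LD; the paper does not specify the tool's output schema, and both URIs below are placeholders, not real concept identifiers:

```python
import json

# Illustrative only: a single SKOS mapping relationship (skos:closeMatch)
# from a hypothetical local vocabulary concept to a placeholder concept
# in the Getty Art and Architecture Thesaurus namespace.
mapping = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "@id": "http://example.org/local-vocabulary/concepts/roundhouse",
    "skos:closeMatch": {
        "@id": "http://vocab.getty.edu/aat/300000000"  # placeholder AAT URI
    },
}
print(json.dumps(mapping, indent=2))
```

Because each side of the link is a dereferenceable URI rather than a bare label, a cross-search service can follow such records from legacy indexing terms in one dataset to the central hub vocabulary and out to another dataset's terms.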
  14. Bandholtz, T.; Schulte-Coerne, T.; Glaser, R.; Fock, J.; Keller, T.: iQvoc - open source SKOS(XL) maintenance and publishing tool (2010) 0.01
    
    Abstract
    iQvoc is a new open source SKOS-XL vocabulary management tool developed by the Federal Environment Agency, Germany, and innoQ Deutschland GmbH. Its immediate purpose is maintaining and publishing reference vocabularies in the upcoming Linked Data cloud of environmental information, but it may be easily adapted to host any SKOS-XL compliant vocabulary. iQvoc is implemented as a Ruby on Rails application running on top of JRuby, the Java implementation of the Ruby programming language. To improve the user experience when editing content, iQvoc makes heavy use of the jQuery JavaScript library.
  15. Wicaksana, I.W.S.; Wahyudi, B.: Comparison Latent Semantic and WordNet approach for semantic similarity calculation (2011) 0.01
    Abstract
    Information exchange among the many sources on the Internet is increasingly autonomous, dynamic, and free. This situation gives rise to differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economic domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run on concepts from different domains, with expert (human) judgments as the reference. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
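The vector-space side of a latent semantic comparison reduces to cosine similarity between term vectors. A minimal self-contained sketch, where the toy three-dimensional vectors are invented stand-ins for rows of an SVD-reduced term matrix (not data from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy latent vectors (illustrative): 'bank' in an economic context vs.
# an ecological one, compared against 'money' and 'river'.
vectors = {
    "bank_economy": [0.9, 0.1, 0.0],
    "bank_ecology": [0.1, 0.0, 0.9],
    "money":        [0.8, 0.2, 0.1],
    "river":        [0.0, 0.1, 0.8],
}

print(cosine(vectors["bank_economy"], vectors["money"]))   # high similarity
print(cosine(vectors["bank_economy"], vectors["river"]))   # low similarity
```

The WordNet approach the paper compares against would instead measure path distance between synsets in the lexical hierarchy, which is why the two methods disagree on polysemous words like 'bank'.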
  16. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.01
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
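Evaluation method (2) above — comparing an alignment against a reference alignment — is conventionally scored with precision, recall, and F-measure over correspondence pairs. A minimal sketch with invented toy ontology labels (not the paper's data):

```python
def evaluate_alignment(found, reference):
    """Score an alignment (set of correspondence pairs) against a
    reference alignment; returns (precision, recall, f_measure)."""
    found, reference = set(found), set(reference)
    true_positives = len(found & reference)
    precision = true_positives / len(found) if found else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Illustrative correspondences between two toy ontologies A and B.
reference = {("A:Car", "B:Automobile"), ("A:Bike", "B:Bicycle"),
             ("A:Boat", "B:Ship")}
found = {("A:Car", "B:Automobile"), ("A:Bike", "B:Bicycle"),
         ("A:Boat", "B:Harbour")}  # one wrong correspondence

p, r, f = evaluate_alignment(found, reference)
print(p, r, f)
```

The paper's point is precisely that a good score here does not guarantee the alignment performs well in an application, which motivates its two alternative, scenario-aware evaluation methods.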
  17. Nicholson, D.: High-Level Thesaurus (HILT) project : interoperability and cross-searching distributed services (200?) 0.01
    Abstract
    My presentation is about HILT, the High-Level Thesaurus Project, which is looking, very roughly speaking, at how we might deal with interoperability problems relating to cross-searching distributed services by subject. The aims of HILT are to study and report on the problem of cross-searching and browsing by subject across a range of communities, services, and service or resource types in the UK, given the wide range of subject schemes and associated practices in place.
  18. Mayr, P.; Schaer, P.; Mutschke, P.: ¬A science model driven retrieval prototype (2011) 0.01
    Abstract
    This paper is about a better understanding of the structure and dynamics of science and the use of these insights to compensate for the typical problems that arise in metadata-driven digital libraries. Three science-model-driven retrieval services are presented: query expansion based on co-word analysis, re-ranking via Bradfordizing, and author centrality. The services are evaluated with relevance assessments, from which two important implications emerge: (1) the precision values of the retrieval services are the same as or better than the tf-idf retrieval baseline, and (2) each service retrieves a disjoint set of documents. Each service favors different, but still relevant, documents than pure term-frequency-based rankings do. The proposed models and derived retrieval services therefore open up new viewpoints on the scientific knowledge space and provide an alternative framework for structuring scholarly information systems.
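Of the three services, Bradfordizing is the simplest to sketch: result documents are re-ranked so that documents from the journals occurring most frequently in the result set (the Bradford "core" journals) come first. A minimal sketch under that reading, with invented toy records rather than the authors' implementation:

```python
from collections import Counter

def bradfordize(docs):
    """Re-rank documents so that those published in the journals most
    frequent within the result set (core journals) come first; the sort
    is stable, so the original order is kept within each journal tier."""
    journal_freq = Counter(d["journal"] for d in docs)
    return sorted(docs, key=lambda d: -journal_freq[d["journal"]])

# Toy result set: "J. Core" dominates, so its documents rise to the top.
docs = [
    {"id": 1, "journal": "J. Rare"},
    {"id": 2, "journal": "J. Core"},
    {"id": 3, "journal": "J. Core"},
    {"id": 4, "journal": "J. Mid"},
    {"id": 5, "journal": "J. Core"},
]

print([d["id"] for d in bradfordize(docs)])  # → [2, 3, 5, 1, 4]
```

This ordering is independent of term frequency, which is consistent with the paper's finding that each service surfaces a disjoint set of relevant documents.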
  19. Hafner, R.; Schelling, B.: Automatisierung der Sacherschließung mit Semantic Web Technologie (2015) 0.01
    Date
    22. 6.2015 16:08:38
  20. Lumsden, J.; Hall, H.; Cruickshank, P.: Ontology definition and construction, and epistemological adequacy for systems interoperability : a practitioner analysis (2011) 0.01
    Abstract
    Ontology development is considered to be a useful approach to the design and implementation of interoperable systems. This literature review and commentary examines the current state of knowledge in this field with particular reference to processes involved in assuring epistemological adequacy. It takes the perspective of the information systems practitioner keen to adopt a systematic approach to in-house ontology design, taking into consideration previously published work. The study arises from author involvement in an integration/interoperability project on systems that support Scottish Common Housing Registers in which, ultimately, ontological modelling was not deployed. Issues concerning the agreement of meaning, and the implications for the creation of interoperable systems, are discussed. The extent to which those theories, methods and frameworks provide practitioners with a usable set of tools is explored, and examples of practical applications of ontological modelling are noted. The findings from the review of the literature demonstrate a number of difficulties faced by information systems practitioners keen to develop and deploy domain ontologies. A major problem is deciding which broad approach to take: to rely on automatic ontology construction techniques, or to rely on key words and domain experts to develop ontologies.

Languages

  • e 84
  • d 13

Types

  • a 61
  • el 38
  • m 5
  • s 3
  • x 3
  • r 1