Search (224 results, page 2 of 12)

  • language_ss:"e"
  • theme_ss:"Semantische Interoperabilität"
  1. Dunsire, G.; Nicholson, D.: Signposting the crossroads : terminology Web services and classification-based interoperability (2010) 0.02
    0.018529613 = product of:
      0.037059225 = sum of:
        0.037059225 = sum of:
          0.005858987 = weight(_text_:a in 4066) [ClassicSimilarity], result of:
            0.005858987 = score(doc=4066,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.11032722 = fieldWeight in 4066, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
          0.03120024 = weight(_text_:22 in 4066) [ClassicSimilarity], result of:
            0.03120024 = score(doc=4066,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.19345059 = fieldWeight in 4066, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4066)
      0.5 = coord(1/2)
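The explain tree above is Lucene's ClassicSimilarity breakdown: each term leaf scores queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm. A minimal sketch reproducing the per-leaf arithmetic (function and variable names are illustrative, not Lucene's API):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    """Reproduce one weight(...) leaf of a Lucene ClassicSimilarity
    explain tree: score = queryWeight * fieldWeight."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                               # tf(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# The _text_:22 leaf for doc 4066 above: freq=2.0, docFreq=3622,
# maxDocs=44218, fieldNorm=0.0390625, queryNorm=0.046056706.
score = classic_term_score(2.0, 3622, 44218, 0.0390625, 0.046056706)
```

Summing the per-term leaf scores and multiplying by coord(1/2) reproduces the entry's displayed total of about 0.0185.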
    
    Abstract
    The focus of this paper is the provision of terminology- and classification-based interoperability data via web services, initially using interoperability data based on a Dewey Decimal Classification (DDC) spine, but with the aim of exploring other possibilities in time, including the use of other spines. The High-Level Thesaurus Project (HILT) Phase IV developed pilot web services based on SRW/U, SOAP, and SKOS to deliver machine-readable terminology and cross-terminology mapping data likely to be useful to information services wishing to enhance their subject search or browse services. It also developed an associated toolkit to help information services' technical staff embed HILT-related functionality within service interfaces. Several UK information services have created illustrative user interface enhancements using HILT functionality, and these demonstrate what is possible. HILT currently has the following subject schemes mounted and available: DDC, CAB, GCMD, HASSET, IPSV, LCSH, MeSH, NMR, SCAS, UNESCO, and AAT. It also has high-level mappings between some of these schemes and DDC, and some deeper pilot mappings available.
    Content
    Part of: Papers from Classification at a Crossroads: Multiple Directions to Usability: International UDC Seminar 2009 - Part 2
    Date
    6. 1.2011 19:22:48
    Type
    a
  2. Concepts in Context : Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010 (2011) 0.02
    Content
    Winfried Gödert: Programmatic Issues and Introduction - Dagobert Soergel: Conceptual Foundations for Semantic Mapping and Semantic Search - Jan-Helge Jacobs, Tina Mengel, Katrin Müller: Insights and Outlooks: A Retrospective View on the CrissCross Project - Yvonne Jahns, Helga Karg: Translingual Retrieval: Moving between Vocabularies - MACS 2010 - Jessica Hubrich: Intersystem Relations: Characteristics and Functionalities - Stella G. Dextre Clarke: In Pursuit of Interoperability: Can We Standardize Mapping Types? - Philipp Mayr, Philipp Schaer, Peter Mutschke: A Science Model Driven Retrieval Prototype - Claudia Effenberger, Julia Hauser: Would an Explicit Versioning of the DDC Bring Advantages for Retrieval? - Gordon Dunsire: Interoperability and Semantics in RDF Representations of FRBR, FRAD and FRSAD - Maja Zumer: FRSAD: Challenges of Modeling the Aboutness - Michael Panzer: Two Tales of a Concept: Aligning FRSAD with SKOS - Felix Boteram: Integrating Semantic Interoperability into FRSAD
    Date
    22. 2.2013 11:34:18
  3. Lauser, B.; Johannsen, G.; Caracciolo, C.; Hage, W.R. van; Keizer, J.; Mayr, P.: Comparing human and automatic thesaurus mapping approaches in the agricultural domain (2008) 0.02
    Abstract
    Knowledge organization systems (KOS), like thesauri and other controlled vocabularies, are used to provide subject access to information systems across the web. Due to the heterogeneity of these systems, mapping between vocabularies becomes crucial for retrieving relevant information. However, mapping thesauri is a laborious task, and thus considerable effort is being made to automate the mapping process. This paper examines two mapping approaches involving the agricultural thesaurus AGROVOC, one machine-created and one human-created. We address the basic question "What are the pros and cons of human and automatic mapping, and how can they complement each other?" By pointing out the difficulties in specific cases or groups of cases and grouping the sample into simple and difficult types of mappings, we show the limitations of current automatic methods and arrive at some basic recommendations on which approach to use when.
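As a toy illustration of the kind of automatic matching the paper evaluates (not the authors' actual method), lexical thesaurus mapping can be sketched as string similarity between term lists; the sample terms and the threshold below are invented:

```python
from difflib import SequenceMatcher

def lexical_mappings(source_terms, target_terms, threshold=0.85):
    """Propose source->target candidate mappings by normalized string
    similarity; a human reviewer would then accept or reject each one."""
    mappings = []
    for s in source_terms:
        for t in target_terms:
            ratio = SequenceMatcher(None, s.lower(), t.lower()).ratio()
            if ratio >= threshold:
                mappings.append((s, t, round(ratio, 2)))
    return mappings

# Tiny invented samples standing in for AGROVOC and a target vocabulary.
candidates = lexical_mappings(["Maize", "Soil fertility"],
                              ["maize", "soil fertility", "fertilizers"])
```

A real system would add morphological normalization and structural evidence; this sketch only shows why purely lexical matching misses the difficult cases the paper groups separately.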
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Type
    a
  4. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to music resources by aligning library data with Linked Data : a report of methodologies and preliminary findings (2013) 0.02
    Abstract
    As a part of a research project aiming to connect library data to the unfamiliar data sets available in the Linked Data (LD) community's CKAN Data Hub (thedatahub.org), this project collected, analyzed, and mapped properties used in describing and accessing music recordings, scores, and music-related information used by selected music LD data sets, library catalogs, and various digital collections created by libraries and other cultural institutions. This article reviews current efforts to connect music data through the Semantic Web, with an emphasis on the Music Ontology (MO) and ontology alignment approaches; it also presents a framework for understanding the life cycle of a musical work, focusing on the central activities of composition, performance, and use. The project studied metadata structures and properties of 11 music-related LD data sets and mapped them to the descriptions commonly used in the library cataloging records for sound recordings and musical scores (including MARC records and their extended schema.org markup), and records from 20 collections of digitized music recordings and scores (featuring a variety of metadata structures). The analysis resulted in a set of crosswalks and a unified crosswalk that aligns these properties. The paper reports on detailed methodologies used and discusses research findings and issues. Topics of particular concern include (a) the challenges of mapping between the overgeneralized descriptions found in library data and the specialized, music-oriented properties present in the LD data sets; (b) the hidden information and access points in library data; and (c) the potential benefits of enriching library data through the mapping of properties found in library catalogs to similar properties used by LD data sets.
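The crosswalks the article describes align properties across metadata schemes. A minimal sketch of the idea (the property names below are invented stand-ins, not the project's actual crosswalk):

```python
# Maps a library-data property to candidate Linked Data properties.
# Keys and values are illustrative placeholders only.
CROSSWALK = {
    "marc:245a": ["dc:title"],          # title proper
    "marc:100a": ["dcterms:creator"],   # main entry, personal name
    "marc:382a": ["mo:instrument"],     # medium of performance
}

def align_property(library_prop):
    """Return Linked Data properties aligned with a library property,
    or an empty list when the crosswalk has no entry (a 'hidden'
    access point, in the article's terms)."""
    return CROSSWALK.get(library_prop, [])
```

A unified crosswalk like the one the authors report would merge many such per-dataset tables, recording for each library property the matching property in each of the 11 LD data sets.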
    Date
    28.10.2013 17:22:17
    Type
    a
  5. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.02
    Abstract
    Ontologies tend to be found everywhere and are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework; in particular, the techniques presented can equally be applied to database schema matching, catalog integration, XML schema matching, and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical, and application perspectives.
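As the abstract notes, ontology matching produces correspondences between entities of two ontologies, each carrying a relation and a confidence. A minimal sketch of that data structure (names are illustrative, not an API from the book):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """One cell of an alignment: entity e1 of ontology O1 stands in
    relation (e.g. equivalence '=', subsumption '<') to entity e2 of O2,
    with a confidence in [0, 1]."""
    e1: str
    e2: str
    relation: str
    confidence: float

# An alignment is simply a set of such correspondences.
alignment = {
    Correspondence("o1:Book", "o2:Monograph", "=", 0.9),
    Correspondence("o1:Author", "o2:Person", "<", 0.7),  # subsumption
}

def above(alignment, threshold):
    """Trim an alignment to correspondences at or above a confidence
    threshold, a common post-processing step in matching systems."""
    return {c for c in alignment if c.confidence >= threshold}
```

Matching systems differ mainly in how they compute the confidence values; the output format is broadly shared across the solutions the book surveys.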
    Date
    20. 6.2012 19:08:22
  6. Landry, P.: MACS: multilingual access to subject and link management : Extending the Multilingual Capacity of TEL in the EDL Project (2007) 0.02
    Content
    Lecture held at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  7. Woldering, B.: Die Europäische Digitale Bibliothek nimmt Gestalt an (2007) 0.01
    Date
    22. 2.2009 19:10:56
    Type
    a
  8. Faro, S.; Francesconi, E.; Marinai, E.; Sandrucci, V.: Report on execution and results of the interoperability tests (2008) 0.01
    Date
    7.11.2008 10:40:22
  9. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.01
    Date
    7.11.2008 10:40:22
  10. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.01
    Date
    26.12.2011 13:22:27
  11. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
  12. Kahlawi, A.: An ontology driven ESCO LOD quality enhancement (2020) 0.00
    Abstract
    The labor market is a system that is complex and difficult to manage. To overcome this challenge, the European Union has launched the ESCO project, a language that aims to describe this labor market. In order to support the spread of this project, its dataset was published as linked open data (LOD). For LOD to be usable and reusable, a set of conditions has to be met. First, the LOD must be feasible and of high quality. In addition, it must provide the user with the right answers, and it has to be built according to a clear and correct structure. This study investigates the LOD of ESCO, focusing on data quality and data structure. The former is evaluated by applying a set of SPARQL queries. This provides solutions to improve its quality via a set of rules built in first-order logic. This process was conducted based on a newly proposed ESCO ontology.
    Type
    a
  13. Ledl, A.: The Basel Register of Thesauri, Ontologies & Classifications (BARTOC) (2015) 0.00
    Source
    Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. and M.I. Cordeiro
    Type
    a
  14. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.00
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
  15. Posch, L.; Schaer, P.; Bleier, A.; Strohmaier, M.: A system for probabilistic linking of thesauri and classification systems (2015) 0.00
    Abstract
    This paper presents a system which creates and visualizes probabilistic semantic links between concepts in a thesaurus and classes in a classification system. For creating the links, we build on the Polylingual Labeled Topic Model (PLL-TM) (Posch et al., in KI 2015: advances in artificial intelligence, 2015). PLL-TM identifies probable thesaurus descriptors for each class in the classification system by using information from the natural language text of documents, their assigned thesaurus descriptors and their designated classes. The links are then presented to users of the system in an interactive visualization, providing them with an automatically generated overview of the relations between the thesaurus and the classification system.
    Type
    a
  16. McCulloch, E.; Shiri, A.; Nicholson, A.D.: Subject searching requirements : the HILT II experience (2004) 0.00
    Abstract
    The HILT Phase II project aimed to develop a pilot terminologies server with a view to improving cross-sectoral information retrieval. In order to inform this process, it was first necessary to examine how a representative group of users approached a range of information-related tasks. This paper focuses on exploratory interviews conducted to investigate the proposed ideal and actual strategies of a group of 30 users in relation to eight separate information tasks. In addition, users were asked to give examples of search terms they may employ and to describe how they would formulate search queries in each scenario. The interview process undertaken and the results compiled are outlined, and associated implications for the development of a pilot terminologies server are discussed.
    Type
    a
  17. Kalfoglou, Y.; Schorlemmer, M.: Ontology mapping : the state of the art 0.00
    Abstract
    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
    Type
    a
  18. Khiat, A.; Benaissa, M.: Approach for instance-based ontology alignment : using argument and event structures of generative lexicon (2014) 0.00
    Abstract
    Ontology alignment has become a very important problem for ensuring semantic interoperability between heterogeneous and distributed sources of information. Instance-based ontology alignment is a very promising technique for finding semantic correspondences between entities of different ontologies when they contain many instances. In this paper, we describe a new approach to aligning ontologies that do not share common instances. This approach extracts the argument and event structures from a set of instances of a concept of the source ontology and compares them, using Generative Lexicon Theory, with semantic features extracted from a set of instances of a concept of the target ontology. We show that the approach is theoretically well founded, because it is based on linguistic semantics, and useful in practice. We present the experimental results obtained by running our approach on the Biblio test of the Benchmark series of OAEI 2011. The results show the good performance of our approach.
    Type
    a
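    The overall shape of instance-based alignment (extract features from each concept's instances, then compare the feature sets) can be sketched in a few lines. Note the hedge: the paper's linguistic analysis via Generative Lexicon Theory is replaced here by a trivial bag-of-words extractor, and the instance texts are invented, so only the pipeline shape is shown:

    ```python
    import re

    def features(instances):
        """Stand-in for the paper's linguistic analysis: token features of instances."""
        toks = set()
        for text in instances:
            toks |= set(re.findall(r"[a-z]+", text.lower()))
        return toks

    def similarity(src_instances, tgt_instances):
        """Jaccard overlap of extracted feature sets (a placeholder measure)."""
        a, b = features(src_instances), features(tgt_instances)
        return len(a & b) / len(a | b) if a | b else 0.0

    # Two concepts from different ontologies that share no instances directly,
    # but whose instance descriptions overlap semantically.
    source_concept = ["A paper on ontology alignment", "Survey of alignment systems"]
    target_concept = ["Ontology matching survey paper", "Alignment of ontologies"]

    score = similarity(source_concept, target_concept)
    ```

    Concept pairs scoring above a threshold would be proposed as alignment candidates; the contribution of the paper lies in what `features` extracts, not in the comparison step.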
  19. Kim, J.-M.; Shin, H.; Kim, H.-J.: Schema and constraints-based matching and merging of Topic Maps (2007) 0.00
    Abstract
    In this paper, we propose a multi-strategic matching and merging approach to find correspondences between ontologies based on the syntactic or semantic characteristics and constraints of Topic Maps. Our multi-strategic matching approach consists of a linguistic module and a Topic Map constraints-based module. The linguistic module computes similarities between concepts using morphological analysis, string normalization and tokenization, and language-dependent heuristics. The Topic Map constraints-based module takes advantage of several Topic Maps-dependent techniques, such as topic property-based matching, hierarchy-based matching, and association-based matching. This composite matching procedure need not score the full cross-product of topic pairs from the two ontologies, because unmatched pairs of topics can be removed using the characteristics and constraints of the Topic Maps. Merging between Topic Maps follows the matching operations. We define a MERGE function to integrate two Topic Maps into a new Topic Map that satisfies merge requirements such as entity preservation, property preservation, relation preservation, and conflict resolution. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Wikipedia philosophy ontology as input ontologies. Our experiments show that the automatically generated matching results conform to the outputs generated manually by domain experts and can be of great benefit to the subsequent merging operations.
    Type
    a
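    The linguistic module described above (string normalization, tokenization, similarity) can be illustrated with a minimal sketch. This is not the authors' implementation; the normalization rules and threshold are assumptions chosen for illustration:

    ```python
    import re

    def normalize(name):
        """Lowercase a topic name and split camelCase/punctuation into tokens."""
        name = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", name)  # "TopicMap" -> "Topic Map"
        return set(re.findall(r"[a-z]+", name.lower()))

    def name_similarity(a, b):
        """Token-overlap (Jaccard) similarity between two topic names."""
        ta, tb = normalize(a), normalize(b)
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def match(topics_a, topics_b, threshold=0.5):
        """Keep only candidate pairs above the threshold. (The paper's
        constraints-based module would additionally prune pairs using
        Topic Map properties, hierarchy, and associations.)"""
        return [(x, y) for x in topics_a for y in topics_b
                if name_similarity(x, y) >= threshold]
    ```

    In the paper this lexical score is only one signal; the Topic Map constraints are what allow unmatched pairs to be discarded cheaply.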
  20. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2010) 0.00
    Abstract
    Purpose - The paper aims to develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems.
    Design/methodology/approach - Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework.
    Findings - The major findings showed that, given the large variety of terminology resources distributed throughout the web, the proposed middleware service is essential for integrating the different terminology resources, both technically and semantically, in order to facilitate subject cross-browsing. A set of recommendations is also made, outlining the important approaches and features that support such a cross-browsing middleware service.
    Originality/value - Cross-browsing features are lacking in current library portal meta-search systems, so users are deprived of this valuable retrieval provision. This research investigated the case for such a system and developed a prototype to fill this gap.
    Type
    a
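    The spine architecture behind the prototype (each vocabulary mapped to DDC, with cross-browsing resolved through the shared notation) can be sketched as follows. The mappings and terms below are invented for illustration and do not come from the prototype:

    ```python
    # Hypothetical term-to-DDC mappings for two controlled vocabularies.
    UKAT_TO_DDC = {"Computer programming": "005.1"}
    ACM_TO_DDC = {"Software engineering": "005.1",
                  "Programming techniques": "005.1"}

    def cross_browse(term, source_map, target_map):
        """Resolve term -> DDC class -> terms of the other scheme in that class."""
        ddc = source_map.get(term)
        if ddc is None:
            return []
        return sorted(t for t, n in target_map.items() if n == ddc)
    ```

    Because every vocabulary maps to the spine rather than to every other vocabulary, adding an (n+1)th scheme requires one new mapping instead of n pairwise ones, which is the main argument for a spine-based middleware service.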

Years

Types

  • a 158
  • el 74
  • m 12
  • s 5
  • x 4
  • r 3
  • p 2
  • n 1