Search (51 results, page 3 of 3)

  • Active filter: theme_ss:"Semantische Interoperabilität"
  • Active filter: type_ss:"el"
  1. Shaw, R.; Rabinowitz, A.; Golden, P.; Kansa, E.: Report on and demonstration of the PeriodO period gazetteer (2015)
    Abstract
    The PeriodO period gazetteer documents definitions of historical period names. Each entry of the gazetteer identifies the definition of a single period. To be included in the gazetteer, a definition must a) give the period a name, b) impose some temporal bounds on the period, c) have some implicit or explicit association with a geographical region, and d) have been formally or informally published in some citable source. Much care has been put into giving period definitions stable identifiers that can be resolved to RDF representations of period definitions. Anyone can propose additions of new definitions to PeriodO, and we have implemented an open source web service and browser-based client for distributed versioning and collaborative maintenance of the gazetteer.
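    The following is an editorial sketch, not the project's published client code: it shows how a stable period identifier of the kind described above might be dereferenced to an RDF representation via HTTP content negotiation. The URI, and the assumption that the server honours an Accept: text/turtle header, are placeholders.

```python
# Minimal sketch: resolve a PeriodO-style period identifier to RDF via
# content negotiation. The identifier below is a placeholder, not a real
# PeriodO permalink.
import requests

def fetch_period_definition(period_uri: str) -> str:
    """Request a Turtle serialization of a period definition."""
    response = requests.get(
        period_uri,
        headers={"Accept": "text/turtle"},  # ask the server for Turtle
        timeout=10,
        allow_redirects=True,               # stable IDs typically redirect
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Placeholder URI; substitute a real period permalink from the gazetteer.
    print(fetch_period_definition("https://example.org/periodo/p0example"))
```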
  2. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020)
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
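    As a hedged illustration of the merging step in such an aggregation (not the project's actual infrastructure), the sketch below combines RDF metadata from two hypothetical providers into a single graph with rdflib; the record URIs and titles are invented.

```python
# Minimal, self-contained sketch of linked-data aggregation: metadata from
# two providers, already retrieved as RDF, is merged into one graph.
from rdflib import Graph

provider_a = """
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<https://example.org/objects/1> dc:title "Illuminated psalter" .
"""
provider_b = """
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<https://example.org/objects/2> dc:title "Medieval map of Europe" .
"""

aggregated = Graph()
for ttl in (provider_a, provider_b):
    aggregated.parse(data=ttl, format="turtle")  # merge each provider's triples

for subject, _, title in aggregated.triples((None, None, None)):
    print(subject, title)
```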
  3. Angjeli, A.; Isaac, A.: Semantic web and vocabularies interoperability : an experiment with illuminations collections (2008)
    Abstract
    During 2006 and 2007, the BnF collaborated with the National Library of the Netherlands (KB) within the framework of the Dutch project STITCH. This project investigates semantic interoperability through concrete experiments, especially in relation to searching: how can we conduct semantic searches across several digital heritage collections? The metadata related to content analysis are often heterogeneous. Beyond manual mapping of semantically similar entities, STITCH explores the techniques of the semantic web, particularly ontology mapping. This paper is about an experiment made on two digital iconographic collections: Mandragore, the iconographic database of the Manuscript Department of the BnF, and the Medieval Illuminated Manuscripts collection of the KB. While the content of these two collections is similar, they have been processed differently, and the vocabularies used to index their content are very different. The vocabularies of Mandragore and Iconclass are both controlled and hierarchical, but they do not have the same semantics and structure. This difference is of particular interest to the STITCH project, as it aims to study the automatic alignment of two vocabularies. The collaborative experiment started with a precise analysis of each vocabulary, including the concepts and their representation, the lexical properties of the terms used, the semantic relationships, and so on. The team of Dutch researchers then studied and implemented mechanisms for aligning the two vocabularies. Because the initial models differed, a common standard was needed to enable the alignment procedures; RDF and SKOS were selected for this purpose. The experiment led to a prototype that allows querying both databases at the same time through a single interface: the descriptors of each vocabulary are used as search terms for all images, regardless of the collection they belong to. This experiment is only one step in the search for solutions that aim at making navigation easier between heritage collections with heterogeneous metadata.
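    A minimal sketch of the idea, with invented concept URIs rather than real Mandragore or Iconclass identifiers: once both vocabularies are available in SKOS, an alignment can be recorded with SKOS mapping properties and used to expand a search across both collections.

```python
# Hedged illustration: encode one vocabulary alignment with SKOS mapping
# properties and use it to expand a query across both collections.
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

g = Graph()
mandragore_term = URIRef("https://example.org/mandragore/ange")
iconclass_term = URIRef("https://example.org/iconclass/11G")

# One alignment decision (intellectual or automatic), stored as a triple.
g.add((mandragore_term, SKOS.closeMatch, iconclass_term))

def expand_query(term: URIRef) -> set[URIRef]:
    """Return the term plus all concepts aligned to it, in either direction."""
    aligned = set(g.objects(term, SKOS.closeMatch)) | set(g.subjects(SKOS.closeMatch, term))
    return {term} | aligned

print(expand_query(mandragore_term))
```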
  4. Concepts in Context : Cologne Conference on Interoperability and Semantics in Knowledge Organization
    Abstract
    On 19 and 20 July 2010, the Institute of Information Management (IIM) of the Fachhochschule Köln and the German National Library (Deutsche Nationalbibliothek, DNB) will host the conference Concepts in Context - Cologne Conference on Interoperability and Semantics in Knowledge Organization, within the framework of the CrissCross and RESEDA projects. The conference takes place at the Fachhochschule Köln and is devoted to questions of interoperability and semantic information in knowledge organization. It offers experts, practitioners, and other interested parties the opportunity to discuss various models and strategies of knowledge organization and to learn about and exchange views on new developments in the standardization and implementation of such models. The first day is designed as the concluding workshop of the DFG-funded CrissCross project and offers, in addition to a comprehensive overview of the project, further practical examples of semantic interoperability and possible scenarios for its application in online catalogues and in the context of the Semantic Web. The second day provides a deeper examination of recent developments in the interoperability of different concept systems as well as of promising models of semantic knowledge organization. Current thematic focal points will be the Functional Requirements for Bibliographic Records (FRBR) and the Functional Requirements for Subject Authority Data (FRSAD). Information specialists from several countries are expected at the conference.
    Content
    Contributions:
      • Insights and Outlooks: A Retrospective View on the CrissCross Project - Jan-Helge Jacobs, Tina Mengel, Katrin Müller
      • Translingual Retrieval: Moving between Vocabularies - MACS 2010 - Helga Karg and Yvonne Jahns
      • Intersystem Relations: Characteristics and Functionalities - Jessica Hubrich
      • Would an Explicit Versioning of the DDC Bring Advantages for Retrieval? - Claudia Effenberger and Julia Hauser
      • A Semantic Web View on Concepts and their Alignments - From Specific Library Cases to a Wider Linked Data Perspective - Antoine Isaac
      • Conceptual Foundations for Semantic Mapping and Semantic Search - Dagobert Soergel
      • In Pursuit of Cross-Vocabulary Interoperability: Can We Standardize Mapping Types? - Stella Dextre Clarke
      • Searching in a Multi-Thesauri-Scenario - Experiences with SKOS and Terminology Mappings - Philipp Mayr
      • Interoperability and Semantics in RDF Representations of FRBR, FRAD and FRSAD - Gordon Dunsire
      • FRSAD: Challenges of Modelling the Aboutness - Maja Zumer
      • Integrating Interoperability into FRSAD - Felix Boteram
  5. Kaczmarek, M.; Kruk, S.R.; Gzella, A.: Collaborative building of controlled vocabulary crosswalks (2007)
    Abstract
    One of the main features of classic libraries is metadata, which is also the key aspect of the Semantic Web. In the process of resource annotation, librarians use different kinds of Knowledge Organization Systems (KOS); these range from controlled vocabularies to classifications and categories (e.g., taxonomies) and to relationship lists (e.g., thesauri). The diversity of controlled vocabularies used by various libraries and organizations has become a bottleneck for efficient information exchange between different entities. Even though a simple one-to-one mapping could be established based on the similarities between names of concepts, we cannot derive information about the hierarchy between concepts from two different KOS. One solution to this problem is an algorithm based on data delivered by a large community of users using many classification schemata at once. The rationale behind it is that similar resources can be described by equivalent concepts taken from different taxonomies. The more annotations are collected, the more precise the result of this crosswalk will be.
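    The following sketch is one assumed reading of that rationale, not the authors' algorithm: resources annotated with concepts from two schemes at once provide co-assignment counts, and frequently co-assigned pairs become crosswalk candidates. All sample data are invented.

```python
# Sketch: derive crosswalk candidates from community annotations by counting
# how often concepts from two different KOS are assigned to the same resource.
from collections import Counter
from itertools import product

# Each resource carries annotations from two schemes (invented sample data).
annotations = [
    {"kos_a": ["Ontology"], "kos_b": ["Knowledge representation"]},
    {"kos_a": ["Ontology"], "kos_b": ["Knowledge representation", "Semantics"]},
    {"kos_a": ["Thesaurus"], "kos_b": ["Controlled vocabulary"]},
]

evidence: Counter = Counter()
for resource in annotations:
    for a, b in product(resource["kos_a"], resource["kos_b"]):
        evidence[(a, b)] += 1  # co-assignment counts as mapping evidence

# The more annotations collected, the more reliable the top-ranked pairs become.
for (a, b), count in evidence.most_common(3):
    print(f"{a!r} -> {b!r} (evidence: {count})")
```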
  6. Mitchell, J.S.; Panzer, M.: Dewey linked data : Making connections with old friends and new acquaintances (2012)
    Abstract
    This paper explores the history, use cases, and future plans associated with the availability of the Dewey Decimal Classification (DDC) system as linked data. Parts of the DDC have been available as linked data since 2009. Initial efforts included the DDC Summaries (the top three levels of the DDC) in eleven languages, exposed as linked data in dewey.info. In 2010, the content of dewey.info was further extended by the addition of assignable numbers and captions from the Abridged Edition 14 data files in English, Italian, and Vietnamese. During 2012, we will add assignable numbers and captions from the latest full edition database, DDC 23. In addition to the "old friends" of different Dewey language versions, institutions such as the British Library and the Deutsche Nationalbibliothek have made use of Dewey linked data in bibliographic records and authority files, and AGROVOC has linked to our data at a general level. We expect to extend our linked data network shortly to "new acquaintances" such as GeoNames, ISO 639-3 language codes, and the Mathematics Subject Classification. In particular, we will examine the linking process to GeoNames as an example of cross-domain vocabulary alignment. In addition to linking plans, we report on use cases that facilitate machine-assisted categorization and support discovery in the Semantic Web environment.
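    As an illustration only (the URI pattern and modelling choices are assumptions, not the official dewey.info data model), a single DDC class could be expressed as a SKOS concept with a notation and multilingual captions:

```python
# Hedged sketch: one DDC class modelled as a SKOS concept. The URI pattern
# and property choices are assumptions made for illustration.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, SKOS

g = Graph()
ddc_600 = URIRef("http://dewey.info/class/600/")  # assumed URI pattern

g.add((ddc_600, RDF.type, SKOS.Concept))
g.add((ddc_600, SKOS.notation, Literal("600")))
g.add((ddc_600, SKOS.prefLabel, Literal("Technology", lang="en")))
g.add((ddc_600, SKOS.prefLabel, Literal("Tecnologia", lang="it")))

print(g.serialize(format="turtle"))
```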
  7. Kempf, A.O.; Neubert, J.; Faden, M.: ¬The missing link : a vocabulary mapping effort in economics (2015)
    Abstract
    In economics, there exists an internationally established classification system: research literature is usually classified according to the JEL classification codes, a system originated by the Journal of Economic Literature and published by the American Economic Association (AEA). Complementarily to keywords, which are usually assigned freely, economists widely use the JEL codes when classifying their publications. In cooperation with KU Leuven, ZBW - Leibniz Information Centre for Economics has published an unofficial multilingual version of JEL in SKOS format. In addition, there exists the STW Thesaurus for Economics, a bilingual domain-specific controlled vocabulary maintained by the German National Library of Economics (ZBW). Developed in the mid-1990s and constantly updated since then to reflect current terminology usage in the international research literature in economics, it covers all sub-fields of economics as well as business economics and business practice, containing subject headings that are clearly delimited from each other. It was published on the web as Linked Open Data in 2009.
  8. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008)
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
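    The sketch below is hypothetical: HILT's real interfaces are based on SRW/U and SOAP, whereas this simplified client assumes a plain HTTP endpoint returning SKOS data, purely to illustrate how machine-readable broader-term data served by such a service could be consumed. The endpoint URL and query parameter are invented.

```python
# Hypothetical client for a SKOS terminology service; the endpoint and its
# parameters are assumptions, not HILT's actual SRW/U or SOAP interface.
import requests
from rdflib import Graph
from rdflib.namespace import SKOS

def broader_terms(endpoint: str, concept_uri: str) -> list[str]:
    """Fetch SKOS data about a concept and list its broader terms."""
    resp = requests.get(endpoint, params={"uri": concept_uri},
                        headers={"Accept": "text/turtle"}, timeout=10)
    resp.raise_for_status()
    g = Graph()
    g.parse(data=resp.text, format="turtle")
    return [str(o) for o in g.objects(None, SKOS.broader)]

# Placeholder call; a deployed terminology service endpoint would go here.
# print(broader_terms("https://example.org/hilt/concept", "https://example.org/scheme/term"))
```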
  9. Strötgen, R.: Anfragetransfers zur Integration von Internetquellen in Digitalen Bibliotheken auf der Grundlage statistischer Termrelationen (2007)
    Abstract
    In digital libraries, understood as integrated points of access to what are usually several different document collections, heterogeneity appears in many forms:
      • as technical heterogeneity, through the interplay of different operating systems, database systems, or software systems;
      • as structural heterogeneity, through the presence of different document structures and metadata standards; and
      • as semantic heterogeneity, when documents have been indexed using different ontologies (used here in the broader sense of documentation languages such as thesauri and classifications) or have not been annotated with metadata at all.
    Semantic heterogeneity can be addressed by advancing the standardization of metadata (e.g., by the Dublin Core Metadata Initiative or via the Resource Description Framework (RDF) in the context of the Semantic Web) and by promoting its use. However, given the differing interests of all parties involved (libraries, documentation centres, database producers, and "free" providers of document collections and databases, among others), there is little prospect that such standardization will eliminate semantic heterogeneity entirely; in particular, a uniform use of vocabularies and ontologies is not in sight. In the CARMEN project, the problem of semantic heterogeneity was tackled, among other approaches, through the automatic extraction of metadata from Internet documents and through systems that transform queries via cross-concordances and statistically generated relations. One result of the work at the IZ Sozialwissenschaften was a set of statistical relations between descriptors, computed from co-occurrence data. These relations were then used to translate queries, mediating between different ontologies or free-text terms. The goal of this translation is to improve the (automatic) crossover between differently indexed document collections, e.g., specialized databases and Internet documents, as an approach to handling semantic heterogeneity.
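    A sketch under assumed data (the relation weights below are invented, not CARMEN results): statistical term relations map descriptors of one documentation language to weighted candidate terms of another, and a query is translated before being sent to the differently indexed collection.

```python
# Sketch of query translation via statistical term relations. The relation
# table stands in for co-occurrence-derived weights; all values are invented.
relations = {
    "Migration": [("Wanderung", 0.83), ("Auswanderung", 0.41)],
    "Arbeitsmarkt": [("Beschäftigung", 0.77)],
}

def translate_query(terms: list[str], threshold: float = 0.5) -> list[str]:
    """Replace each query term by its sufficiently strong related terms."""
    translated = []
    for term in terms:
        candidates = [t for t, w in relations.get(term, []) if w >= threshold]
        translated.extend(candidates or [term])  # fall back to original term
    return translated

print(translate_query(["Migration", "Arbeitsmarkt"]))
```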
  10. Euzenat, J.; Bach, T.Le; Barrasa, J.; Bouquet, P.; Bo, J.De; Dieng, R.; Ehrig, M.; Hauswirth, M.; Jarrar, M.; Lara, R.; Maynard, D.; Napoli, A.; Stamou, G.; Stuckenschmidt, H.; Shvaiko, P.; Tessaris, S.; Acker, S. Van; Zaihrayeu, I.: State of the art on ontology alignment (2004)
    Abstract
    In this document we provide an overall view of the state of the art in ontology alignment. It is organised as a description of the need for ontology alignment, a presentation of the techniques currently in use for ontology alignment, and a presentation of existing systems. The state of the art is not restricted to any discipline and considers, for instance, the work on schema matching within the database area as a form of ontology alignment. Some heterogeneity problems on the semantic web can be solved by aligning heterogeneous ontologies; this is illustrated through a number of use cases of ontology alignment. Aligning ontologies consists of providing the corresponding entities in these ontologies. This process is precisely defined in deliverable D2.2.1. The current deliverable presents the many techniques currently used for implementing this process. These techniques are classified along the many features that can be found in ontologies (labels, structures, instances, semantics). They draw on many different disciplines, such as statistics, machine learning, and data analysis. The alignment itself is obtained by combining these techniques towards a particular goal (obtaining an alignment with particular features, optimising some criterion). Several combination techniques are also presented. Finally, these techniques have been tried out in various systems for ontology alignment or schema matching. Several such systems are presented briefly in the last section and characterized by the techniques they rely on. The conclusion is that many techniques are available for achieving ontology alignment and many systems have been developed based on these techniques. However, few comparisons and little integration are actually provided by these implementations. This deliverable serves as a basis for considering further action along these two lines. It provides a first inventory of what should be evaluated and suggests what evaluation criteria can be used.
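    As a small illustration of one family of techniques named above, label-based matching, the following sketch aligns two toy ontologies by string similarity of their labels; real alignment systems combine such evidence with structural, instance-based, and semantic techniques. All entity names are invented.

```python
# Minimal label-based alignment sketch using simple string similarity.
from difflib import SequenceMatcher

ontology_a = {"a1": "Author", "a2": "Publication date", "a3": "Journal article"}
ontology_b = {"b1": "author", "b2": "date of publication", "b3": "article"}

def label_similarity(x: str, y: str) -> float:
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

# Greedy matching: pair each entity in A with its most similar label in B.
for a_id, a_label in ontology_a.items():
    b_id, b_label = max(ontology_b.items(),
                        key=lambda item: label_similarity(a_label, item[1]))
    score = label_similarity(a_label, b_label)
    print(f"{a_id} ({a_label}) ~ {b_id} ({b_label}): {score:.2f}")
```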
  11. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007)
    Abstract
    In 2004, the German Federal Ministry for Education and Research funded a major terminology mapping initiative at the GESIS Social Science Information Centre in Bonn (GESIS-IZ), which will find its conclusion this year. The task of this terminology mapping initiative was to organize, create, and manage 'cross-concordances' between major controlled vocabularies (thesauri, classification systems, subject heading lists), centred around the social sciences but quickly extending to other subject areas. Cross-concordances are intellectually (manually) created crosswalks that determine equivalence, hierarchy, and association relations between terms from two controlled vocabularies. Most vocabularies have been related bilaterally; that is, there is a cross-concordance relating terms from vocabulary A to vocabulary B as well as a cross-concordance relating terms from vocabulary B to vocabulary A (bilateral relations are not necessarily symmetrical). By August 2007, 24 controlled vocabularies from 11 disciplines will be connected, with vocabulary sizes ranging from 2,000 to 17,000 terms per vocabulary. To date, more than 260,000 relations have been generated. A database including all vocabularies and cross-concordances was built, and a 'heterogeneity service' was developed - a web service that makes the cross-concordances available to other applications. Many cross-concordances are already implemented and utilized for the German Social Science Information Portal Sowiport (www.sowiport.de), which searches bibliographical and other information resources (incl. 13 databases with 10 different vocabularies and ca. 2.5 million references).
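    A miniature of what a lookup against such a 'heterogeneity service' might look like, as an assumption for illustration: the vocabulary names TheSoz and STW are real, but the sample mappings and the data structure are invented, and a real deployment would be a web service rather than an in-memory table.

```python
# Assumed miniature of a cross-concordance lookup: each mapping carries a
# relation type (equivalence, hierarchy, or association). Entries are invented.
CROSSWALK = {
    ("TheSoz", "STW", "Arbeitsmigration"): [("Arbeitsmigranten", "equivalence")],
    ("TheSoz", "STW", "Familie"): [("Familienpolitik", "association")],
}

def lookup(source_vocab: str, target_vocab: str, term: str):
    """Return mapped terms with their relation types. Directions are separate
    cross-concordances, since bilateral relations need not be symmetrical."""
    return CROSSWALK.get((source_vocab, target_vocab, term), [])

print(lookup("TheSoz", "STW", "Arbeitsmigration"))
```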
