Search (64 results, page 1 of 4)

  • × theme_ss:"Semantische Interoperabilität"
  • × type_ss:"el"
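The relevance value printed after each hit (e.g. 0.08) is a Lucene ClassicSimilarity score, a TF-IDF variant. A minimal sketch of how a single query term contributes to such a score, assuming the standard ClassicSimilarity factors (tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm); the numeric values are illustrative:

```python
import math

def term_score(freq: float, idf: float, query_norm: float, field_norm: float) -> float:
    """One term's contribution in Lucene's ClassicSimilarity:
    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                   # dampened term frequency
    query_weight = idf * query_norm        # query-side normalization
    field_weight = tf * idf * field_norm   # document-side weight
    return query_weight * field_weight

# Values of the kind a Lucene "explain" output reports (illustrative):
s = term_score(freq=2.0, idf=1.7554779, query_norm=0.050415643, field_norm=0.046875)
print(s)  # prints a value of roughly 0.0103
```

The full document score is then a coordination-weighted sum of such per-term contributions.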
  1. Dextre Clarke, S.G.: Overview of ISO NP 25964 : structured vocabularies for information retrieval (2007) 0.08
    Abstract
    ISO 2788 and ISO 5964, the international standards for monolingual and multilingual thesauri dated 1986 and 1985 respectively, are very much in need of revision. A proposal to revise them was recently approved by the relevant subcommittee, ISO TC46/SC9. The work will be based on BS 8723, a five-part standard of which Parts 1 and 2 were published in 2005, Parts 3 and 4 are scheduled for publication in 2007, and Part 5 is still in draft. This subsession will address aspects of the whole revision project. It is conceived as a panel session starting with a brief overview from the project leader, followed by three presentations of 15 minutes each, plus 5 minutes each for specific questions. At the end, 20 minutes remain for questions to any or all of the panel and for discussion of issues raised by the workshop participants.
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", a workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007.
  2. Haslhofer, B.: Uniform SPARQL access to interlinked (digital library) sources (2007) 0.07
    Abstract
    In this presentation we focus on a solution for providing uniform access to Digital Libraries and other online services. To enable uniform query access to heterogeneous sources, we must provide metadata interoperability in such a way that a query language - in this case SPARQL - can cope with the incompatibility of the metadata in the various sources without changing their existing information models.
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", a workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007.
    Date
    26.12.2011 13:22:46
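The approach described in entry 2 - answering one query over sources whose metadata schemas differ, mediated by mappings rather than by changing the sources themselves - can be illustrated with a toy mediator in Python (the source names, field names and records below are invented for illustration):

```python
# Toy mediator: one common query vocabulary, per-source field mappings.
SOURCES = {
    "dl_a": {"records": [{"dc_title": "Semantic Web Primer"}],
             "mapping": {"title": "dc_title"}},
    "dl_b": {"records": [{"marc_245": "Thesaurus Construction"}],
             "mapping": {"title": "marc_245"}},
}

def query_all(field: str, keyword: str) -> list[str]:
    """Evaluate one field/keyword query against every source by
    translating the common field name into each source's native element."""
    hits = []
    for src in SOURCES.values():
        native = src["mapping"][field]  # schema-level translation step
        for rec in src["records"]:
            if keyword.lower() in rec[native].lower():
                hits.append(rec[native])
    return hits

print(query_all("title", "thesaurus"))  # ['Thesaurus Construction']
```

A SPARQL mediator does the same kind of rewriting at the level of RDF vocabularies instead of dictionary keys.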
  3. Faro, S.; Francesconi, E.; Sandrucci, V.: Thesauri KOS analysis and selected thesaurus mapping methodology on the project case-study (2007) 0.06
    Abstract
    - Introduction to the Thesaurus Interoperability problem
    - Analysis of the thesauri for the project case study
    - Overview of Schema/Ontology Mapping methodologies
    - The proposed approach for thesaurus mapping
    - Standards for implementing the proposed methodology
    Date
    7.11.2008 10:40:22
  4. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.06
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organization System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
    The paper discusses the importance of these initiatives in releasing as linked data the very large quantities of rich, professionally-generated metadata stored in formats based on these standards, such as UNIMARC and MARC21, addressing such issues as critical mass for semantic and statistical inferencing, integration with user- and machine-generated metadata, and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies, including the Dewey Decimal Classification (DDC), ISBD, Library of Congress Name Authority File (LCNAF), Library of Congress Subject Headings (LCSH), Rameau (French subject headings), Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF) as linked data. Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
    Content
    Lecture given in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
  5. Balakrishnan, U.; Voß, J.: ¬The Cocoda mapping tool (2015) 0.05
    Abstract
    Since the 1990s we have seen an explosion of information, and with it a growing need for data and information aggregation systems that store and manage information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of the stored data. This heterogeneous mix of KOS across systems complicates access and the seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). For coherent indexing with different terminologies, too, mappings are considered a valuable and essential working tool. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has led to an inability to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, their management and quality assessment, as well as tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application to enable uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey classification system (DDC) and the Regensburg classification system (RVK). In the year 2000, the national libraries of Germany, Austria and Switzerland adopted DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have been using their own home-grown systems, the most prominent and popular being the RVK. With the adoption of DDC, building concordances between DDC and RVK has become imperative, although such concordances are still rare. The delay in building comprehensive concordances between these two systems is due to major challenges posed by the sheer size of the two systems (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their respective structures, and variation in the perception and representation of concepts. The challenge is compounded geometrically for any manual attempt in this direction. Although there have been efforts at automatic mapping in recent years (OAEI Library Track 2012-2014 and e.g. Pfeffer 2013), such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for mapping suggestions than for automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc". This includes preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
    Content
    Lecture given at: 14th European Networked Knowledge Organization Systems (NKOS) Workshop, TPDL 2015 Conference in Poznan, Poland, Friday 18th September 2015. See also: http://eprints.rclis.org/28007/. See also: http://coli-conc.gbv.de/.
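Entry 5 names JSKOS as a data format for exchanging concordances. A hedged sketch of a concordance index over JSKOS-like mapping records (the field names and class notations below are simplified illustrations, not the normative JSKOS specification):

```python
# JSKOS-style mapping records between DDC and RVK; notations invented
# for illustration, structure simplified from the project description.
mappings = [
    {"from": "ddc:025.4", "to": "rvk:AN 93550", "type": "close-match"},
    {"from": "ddc:004",   "to": "rvk:ST 110",   "type": "broad-match"},
]

def concordance(maps):
    """Index mapping records by their source notation for quick lookup."""
    index = {}
    for m in maps:
        index.setdefault(m["from"], []).append((m["to"], m["type"]))
    return index

idx = concordance(mappings)
print(idx["ddc:025.4"])  # [('rvk:AN 93550', 'close-match')]
```

A mapping-management tool like Cocoda would layer editing, provenance and quality assessment on top of such a lookup structure.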
  6. Vizine-Goetz, D.; Hickey, C.; Houghton, A.; Thompson, R.: Vocabulary mapping for terminology services (2004) 0.04
    Abstract
    The paper describes a project to add value to controlled vocabularies by making inter-vocabulary associations. A methodology for mapping terms from one vocabulary to another is presented in the form of a case study applying the approach to the Educational Resources Information Center (ERIC) Thesaurus and the Library of Congress Subject Headings (LCSH). Our approach to mapping involves encoding vocabularies according to Machine-Readable Cataloging (MARC) standards, machine matching of vocabulary terms, and categorizing candidate mappings by likelihood of valid mapping. Mapping data is then stored as machine links. Vocabularies with associations to other schemes will be a key component of Web-based terminology services. The paper briefly describes how the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is used to provide access to a vocabulary with mappings.
    Footnote
    Part of a special issue of: Journal of digital information. 4(2004) no.4.
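The workflow in entry 6 - machine matching of vocabulary terms and categorizing candidate mappings by likelihood - can be sketched as follows (the normalization rule and grade labels are invented for illustration, not the project's actual method):

```python
def normalize(term: str) -> str:
    """Crude normalization: lowercase and keep only letters, digits, spaces."""
    return "".join(ch for ch in term.lower() if ch.isalnum() or ch == " ").strip()

def candidate_mappings(eric_terms, lcsh_terms):
    """Pair terms from two vocabularies and grade each candidate:
    'exact' if the strings match verbatim, 'normalized' if only the
    normalized forms match."""
    lcsh_by_norm = {normalize(t): t for t in lcsh_terms}
    out = []
    for e in eric_terms:
        n = normalize(e)
        if n in lcsh_by_norm:
            grade = "exact" if e == lcsh_by_norm[n] else "normalized"
            out.append((e, lcsh_by_norm[n], grade))
    return out

pairs = candidate_mappings(["Information Literacy", "Study Skills"],
                           ["Information literacy", "Card catalogs"])
print(pairs)  # [('Information Literacy', 'Information literacy', 'normalized')]
```

Graded candidates of this kind can then be stored as machine links, with low-likelihood grades routed to human review.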
  7. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.03
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", a workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007.
    Date
    26.12.2011 13:22:27
  8. Takhirov, N.; Aalberg, T.; Duchateau, F.; Zumer, M.: FRBR-ML: a FRBR-based framework for semantic interoperability (2012) 0.03
    Abstract
    Metadata related to cultural items such as literature, music and movies is a valuable resource that is currently exploited in many applications and services based on semantic web technologies. A vast amount of such information has been created by memory institutions in the last decades using different standard or ad hoc schemas, and a main challenge is to make this legacy data accessible as reusable semantic data. On the one hand, this is a syntactic problem that can be solved by transforming the data into formats compatible with the tools used for semantics-aware services. On the other hand, this is a semantic problem. Simply transforming from one format to another does not automatically enable semantic interoperability, and legacy data often needs to be reinterpreted as well as transformed. The conceptual model in the Functional Requirements for Bibliographic Records (FRBR), initially developed as a conceptual framework for library standards and systems, is a major step towards a shared semantic model of the products of the artistic and intellectual endeavor of mankind. The model is generally accepted as sufficiently generic to serve as a conceptual framework for a broad range of cultural heritage metadata. Unfortunately, the existing large body of legacy data makes a transition to this model difficult. For instance, most bibliographic data is still only available in various MARC-based formats, which are hard to render into reusable and meaningful semantic data. Making legacy bibliographic data accessible as semantic data is a complex problem that includes interpreting and transforming the information. In this article, we present our work on transforming and enhancing legacy bibliographic information into a representation where the structure and semantics of the FRBR model is explicit.
  9. Kaczmarek, M.; Kruk, S.R.; Gzella, A.: Collaborative building of controlled vocabulary crosswalks (2007) 0.02
    Abstract
    One of the main features of classic libraries is metadata, which is also the key aspect of the Semantic Web. Librarians, in the process of annotating resources, use different kinds of Knowledge Organization Systems; KOS range from controlled vocabularies to classifications and categories (e.g., taxonomies) and to relationship lists (e.g., thesauri). The diversity of controlled vocabularies used by various libraries and organizations has become a bottleneck for efficient information exchange between different entities. Even though a simple one-to-one mapping could be established based on the similarities between names of concepts, we cannot derive information about the hierarchy between concepts from two different KOS. One solution to this problem is to create an algorithm based on data delivered by a large community of users applying many classification schemata at once. The rationale behind it is that similar resources can be described by equivalent concepts taken from different taxonomies. The more annotations are collected, the more precise the result of this crosswalk will be.
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop", a workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007.
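The community-driven crosswalk idea in entry 9 - inferring concept equivalences from resources annotated with concepts from several KOS at once - can be sketched as a simple co-occurrence count (the annotations below are invented for illustration):

```python
from collections import Counter
from itertools import product

# Each resource carries concepts from two different KOS (invented labels).
annotations = [
    {"kos_a": ["Cats"], "kos_b": ["Felines"]},
    {"kos_a": ["Cats"], "kos_b": ["Felines", "Pets"]},
    {"kos_a": ["Dogs"], "kos_b": ["Pets"]},
]

def crosswalk(notes):
    """Count how often a KOS-A concept co-occurs with a KOS-B concept
    on the same resource; frequent pairs are candidate equivalences."""
    counts = Counter()
    for rec in notes:
        for a, b in product(rec["kos_a"], rec["kos_b"]):
            counts[(a, b)] += 1
    return counts

candidates = crosswalk(annotations)
print(candidates.most_common(1))  # [(('Cats', 'Felines'), 2)]
```

As the abstract notes, the more annotations are collected, the more clearly the true equivalences separate from coincidental pairs.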
  10. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.02
    Abstract
    One of the major problems facing systems for Computer Aided Design (CAD), Architecture Engineering and Construction (AEC) and Geographic Information Systems (GIS) applications today is the lack of interoperability among the various systems. When integrating software applications, substantial difficulties can arise in translating information from one application to the other. In this paper, we focus on semantic difficulties that arise in software integration. Applications may use different terminologies to describe the same domain. Even when applications use the same terminology, they often associate different semantics with the terms. This obstructs information exchange among applications. To circumvent this obstacle, we need some way of explicitly specifying the semantics for each terminology in an unambiguous fashion. Ontologies can provide such a specification. It will be the task of this paper to explain what ontologies are and how they can be used to facilitate interoperability between software systems used in computer aided design, architecture engineering and construction, and geographic information processing.
    Date
    3.12.2016 18:39:22
  11. Ledl, A.: Demonstration of the BAsel Register of Thesauri, Ontologies & Classifications (BARTOC) (2015) 0.02
    Abstract
    The BAsel Register of Thesauri, Ontologies & Classifications (BARTOC, http://bartoc.org) is a bibliographic database that aims to record metadata on as many Knowledge Organization Systems as possible. It has a faceted, responsive search interface in 20 EU languages. With more than 1,300 interdisciplinary items in 77 languages, BARTOC is the largest database of its kind, multilingual both in content and features, and it is still growing. This being said, the demonstration of BARTOC would be suitable for topic no. 10 [Multilingual and Interdisciplinary KOS applications and tools]. BARTOC has been developed by the University Library of Basel, Switzerland. It is rooted in the library and information science tradition of collecting bibliographic records of controlled and structured vocabularies, yet in a more contemporary manner. BARTOC is based on the open source content management system Drupal 7.
    Content
    Talk given at the 14th European Networked Knowledge Organization Systems (NKOS) Workshop, TPDL 2015 Conference in Poznan, Poland, Friday 18th September 2015. See also: http://bartoc.org/.
  12. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for the global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of keywords or descriptors to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages derived from rethinking the OPAC anew, since we look forward to sharing concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval, offering easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigms of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity in neurosciences, linguistics and information sciences would be desirable competencies for further investigations into the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  13. BARTOC : the BAsel Register of Thesauri, Ontologies & Classifications
    
    Abstract
    BARTOC, http://bartoc.org, is a bibliographic database that provides metadata of as many Knowledge Organization Systems (KOS) as possible and offers a faceted, responsive web design search interface in 20 languages. With more than 1100 interdisciplinary items (Thesauri, Ontologies, Classifications, Glossaries, Controlled Vocabularies, Taxonomies) in 70 languages, BARTOC is the largest database of its kind, multilingual both by content and features, and is still growing. Metadata are being enriched with DDC numbers down to the third level, and with subject headings from EuroVoc, the EU's multilingual thesaurus. BARTOC has been developed by the University Library of Basel, Switzerland, and continues the library and information science tradition of collecting bibliographic records of controlled and structured vocabularies.
  14. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000)
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry and domain specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why this problem is not solved by XML, and then discuss why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
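    The interoperability gap described in this abstract can be made concrete with a minimal sketch: two documents conforming to different (hypothetical) DTDs encode the same fact under different element names, and only an explicit term mapping, supplied outside XML itself, aligns them. The element names and the mapping table below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Two hypothetical industry DTDs describe the same fact with different tags.
doc_a = "<vendor><firm>ACME</firm></vendor>"
doc_b = "<supplier><companyName>ACME</companyName></supplier>"

# XML alone cannot align the vocabularies; an explicit semantic mapping is needed.
MAPPING = {"firm": "company", "companyName": "company"}

def extract(xml_text):
    """Return a dict of normalized field -> text, applying the term mapping."""
    root = ET.fromstring(xml_text)
    out = {}
    for elem in root.iter():
        key = MAPPING.get(elem.tag)
        if key and elem.text:
            out[key] = elem.text
    return out

print(extract(doc_a) == extract(doc_b))  # True: both normalize to {'company': 'ACME'}
```

    The point of the sketch is that the mapping dictionary carries the semantics; without it, the two well-formed documents remain mutually opaque.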
    Date
    11. 5.2013 19:22:18
  15. Panzer, M.: Relationships, spaces, and the two faces of Dewey (2008)
    Content
    "When dealing with a large-scale and widely-used knowledge organization system like the Dewey Decimal Classification, we often tend to focus solely on the organization aspect, which is closely intertwined with editorial work. This is perfectly understandable, since developing and updating the DDC, keeping up with current scientific developments, spotting new trends in both scholarly communication and popular publishing, and figuring out how to fit those patterns into the structure of the scheme are as intriguing as they are challenging. From the organization perspective, the intended user of the scheme is mainly the classifier. Dewey acts very much as a number-building engine, providing richly documented concepts to help with classification decisions. Since the Middle Ages, quasi-religious battles have been fought over the "valid" arrangement of places according to specific views of the world, as parodied by Jorge Luis Borges and others. Organizing knowledge has always been primarily an ontological activity; it is about putting the world into the classification. However, there is another side to this coin--the discovery side. While the hierarchical organization of the DDC establishes a default set of places and neighborhoods that is also visible in the physical manifestation of library shelves, this is just one set of relationships in the DDC. A KOS (Knowledge Organization System) becomes powerful by expressing those other relationships in a manner that not only collocates items in a physical place but in a knowledge space, and exposes those other relationships in ways beneficial and congenial to the unique perspective of an information seeker.
    What are those "other" relationships that Dewey possesses and that seem so important to surface? Firstly, there is the relationship of concepts to resources. Dewey has been used for a long time, and over 200,000 numbers are assigned to information resources each year and added to WorldCat by the Library of Congress and the German National Library alone. Secondly, we have relationships between concepts in the scheme itself. Dewey provides a rich set of non-hierarchical relations, indicating other relevant and related subjects across disciplinary boundaries. Thirdly, perhaps most importantly, there is the relationship between the same concepts across different languages. Dewey has been translated extensively, and current versions are available in French, German, Hebrew, Italian, Spanish, and Vietnamese. Briefer representations of the top-three levels (the DDC Summaries) are available in several languages in the DeweyBrowser. This multilingual nature of the scheme allows searchers to access a broader range of resources or to switch the language of--and thus localize--subject metadata seamlessly. MelvilClass, a Dewey front-end developed by the German National Library for the German translation, could be used as a common interface to the DDC in any language, as it is built upon the standard DDC data format. It is not hard to give an example of the basic terminology of a class pulled together in a multilingual way:

        <class/794.8> a skos:Concept ;
            skos:notation "794.8"^^ddc:notation ;
            skos:prefLabel "Computer games"@en ;
            skos:prefLabel "Computerspiele"@de ;
            skos:prefLabel "Jeux sur ordinateur"@fr ;
            skos:prefLabel "Juegos por computador"@es .
    Expressed in such manner, the Dewey number provides a language-independent representation of a Dewey concept, accompanied by language-dependent assertions about the concept. This information, identified by a URI, can be easily consumed by semantic web agents and used in various metadata scenarios. Fourthly, as we have seen, it is important to play well with others, i.e., establishing and maintaining relationships to other KOS and making the scheme available in different formats. As noted in the Dewey blog post "Tags and Dewey," since no single scheme is ever going to be the be-all, end-all solution for knowledge discovery, DDC concepts have been extensively mapped to other vocabularies and taxonomies, sometimes bridging them and acting as a backbone, sometimes using them as additional access vocabulary to be able to do more work "behind the scenes." To enable other applications and schemes to make use of those relationships, the full Dewey database is available in XML format; RDF-based formats and a web service are forthcoming. Pulling those relationships together under a common surface will be the next challenge going forward. In the semantic web community the concept of Linked Data (http://en.wikipedia.org/wiki/Linked_Data) currently receives some attention, with its emphasis on exposing and connecting data using technologies like URIs, HTTP and RDF to improve information discovery on the web. With its focus on relationships and discovery, it seems that Dewey will be well prepared to become part of this big linked data set. Now it is about putting the classification back into the world!"
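    The multilingual class record quoted in the passage above (one language-independent notation, several language-tagged preferred labels) can be sketched as a toy Python structure; this is an illustration only, not the DDC data format or a SKOS implementation.

```python
# A DDC concept: one language-independent notation, many language-tagged labels.
DDC_794_8 = {
    "notation": "794.8",
    "prefLabel": {
        "en": "Computer games",
        "de": "Computerspiele",
        "fr": "Jeux sur ordinateur",
        "es": "Juegos por computador",
    },
}

def label(concept, lang, fallback="en"):
    """Localize a concept caption, falling back to English when no label exists."""
    labels = concept["prefLabel"]
    return labels.get(lang, labels[fallback])

print(label(DDC_794_8, "de"))  # Computerspiele
print(label(DDC_794_8, "vi"))  # no Vietnamese label in this toy record: Computer games
```

    Localizing subject metadata then amounts to keeping the notation fixed and switching the label language, which is exactly the "language-independent representation, language-dependent assertions" split described above.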
  16. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges that were faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
  17. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013)
    
    Content
    FULL PAPERS
    Provenance and Annotations for Linked Data - Kai Eckert
    How Portable Are the Metadata Standards for Scientific Data? A Proposal for a Metadata Infrastructure - Jian Qin, Kai Li
    Lessons Learned in Implementing the Extended Date/Time Format in a Large Digital Library - Hannah Tarver, Mark Phillips
    Towards the Representation of Chinese Traditional Music: A State of the Art Review of Music Metadata Standards - Mi Tian, György Fazekas, Dawn Black, Mark Sandler
    Maps and Gaps: Strategies for Vocabulary Design and Development - Diane Ileana Hillmann, Gordon Dunsire, Jon Phipps
    A Method for the Development of Dublin Core Application Profiles (Me4DCAP V0.1): A Description - Mariana Curado Malta, Ana Alice Baptista
    Find and Combine Vocabularies to Design Metadata Application Profiles using Schema Registries and LOD Resources - Tsunagu Honma, Mitsuharu Nagamori, Shigeo Sugimoto
    Achieving Interoperability between the CARARE Schema for Monuments and Sites and the Europeana Data Model - Antoine Isaac, Valentine Charles, Kate Fernie, Costis Dallas, Dimitris Gavrilis, Stavros Angelis
    With a Focused Intent: Evolution of DCMI as a Research Community - Jihee Beak, Richard P. Smiraglia
    Metadata Capital in a Data Repository - Jane Greenberg, Shea Swauger, Elena Feinstein
    DC Metadata is Alive and Well - A New Standard for Education - Liddy Nevile
    Representation of the UNIMARC Bibliographic Data Format in Resource Description Framework - Gordon Dunsire, Mirna Willer, Predrag Perozic
  18. Kless, D.: From a thesaurus standard to a general knowledge organization standard?! (2007)
    
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  19. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007)
    
    Abstract
    In 2004, the German Federal Ministry for Education and Research funded a major terminology mapping initiative at the GESIS Social Science Information Centre in Bonn (GESIS-IZ), which concludes this year. The task of this initiative was to organize, create and manage 'cross-concordances' between major controlled vocabularies (thesauri, classification systems, subject heading lists) centred on the social sciences but quickly extending to other subject areas. Cross-concordances are intellectually (manually) created crosswalks that determine equivalence, hierarchy, and association relations between terms from two controlled vocabularies. Most vocabularies have been related bilaterally, that is, there is a cross-concordance relating terms from vocabulary A to vocabulary B as well as a cross-concordance relating terms from vocabulary B to vocabulary A (bilateral relations are not necessarily symmetrical). By August 2007, 24 controlled vocabularies from 11 disciplines will be connected, with vocabulary sizes ranging from 2,000 to 17,000 terms per vocabulary. To date more than 260,000 relations have been generated. A database including all vocabularies and cross-concordances was built and a 'heterogeneity service' was developed: a web service which makes the cross-concordances available to other applications. Many cross-concordances are already implemented and utilized in the German Social Science Information Portal Sowiport (www.sowiport.de), which searches bibliographical and other information resources (incl. 13 databases with 10 different vocabularies and ca. 2.5 million references).
    In the final phase of the project, a major evaluation effort is under way to test and measure the effectiveness of the vocabulary mappings in an information system environment. Actual user queries are tested in a distributed search environment, where several bibliographic databases with different controlled vocabularies are searched at the same time. Three query variations are compared to each other: a free-text search without use of the controlled vocabulary or terminology mapping; a controlled vocabulary search, where terms from one vocabulary (a 'home' vocabulary thought to be familiar to the user of a particular database) are used to search all databases; and finally a search where controlled vocabulary terms are translated into the terms of the respective controlled vocabulary of each database. For evaluation purposes, cross-concordances are distinguished between intradisciplinary vocabularies (vocabularies within the social sciences) and interdisciplinary vocabularies (social sciences to other disciplines as well as other combinations). Simultaneously, an extensive quantitative analysis is conducted, aimed at finding patterns in terminology mappings that can explain trends in the effectiveness of terminology mappings, particularly looking at overlapping terms, types of determined relations (equivalence, hierarchy etc.), size of participating vocabularies, etc. This project is the largest terminology mapping effort in Germany. The number and variety of controlled vocabularies targeted provide an optimal basis for insights and further research opportunities. To our knowledge, terminology mapping efforts have rarely been evaluated with stringent qualitative and quantitative measures. This research should contribute to this area. For the NKOS workshop, we plan to present an overview of the project and participating vocabularies, an introduction to the heterogeneity service and its application as well as some of the results and findings of the evaluation, which will be concluded in August.
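    The third query variation described in this abstract, translating 'home' vocabulary terms into a target database's vocabulary via a cross-concordance, can be sketched as follows. The terms and relations below are invented for illustration; the actual mappings are delivered by the project's heterogeneity web service.

```python
# One directed cross-concordance: source term -> (relation type, target term).
# Relation types follow those named in the abstract: equivalence, hierarchy, association.
# Cross-concordances are directed, since bilateral relations need not be symmetrical.
CROSSWALK_A_TO_B = {
    "Arbeitslosigkeit": ("equivalence", "unemployment"),
    "Jugendarbeitslosigkeit": ("hierarchy", "unemployment"),
    "Arbeitsmarkt": ("association", "labor market policy"),
}

def translate(term, crosswalk):
    """Map a home-vocabulary term into the target vocabulary when a relation
    exists; otherwise fall back to the untranslated term (free-text search)."""
    relation = crosswalk.get(term)
    return relation[1] if relation else term

print(translate("Arbeitslosigkeit", CROSSWALK_A_TO_B))  # unemployment
print(translate("Inflation", CROSSWALK_A_TO_B))         # Inflation (no mapping: fall back)
```

    The fallback branch corresponds to the free-text variation in the evaluation design: when no cross-concordance entry exists, the query degrades gracefully instead of failing.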
    Content
    Presentation given at the event "Networked Knowledge Organization Systems and Services: The 6th European Networked Knowledge Organization Systems (NKOS) Workshop, Workshop at the 11th ECDL Conference, Budapest, Hungary, September 21st 2007".
  20. Kollia, I.; Tzouvaras, V.; Drosopoulos, N.; Stamou, G.: ¬A systemic approach for effective semantic access to cultural content (2012)
    
    Abstract
    A large ongoing activity for the digitization, dissemination and preservation of cultural heritage is taking place in Europe, the United States and worldwide, involving all types of cultural institutions, i.e., galleries, libraries, museums, archives, and all types of cultural content. The development of Europeana as a single point of access to European cultural heritage has probably been the most important result of the activities in the field so far. Semantic interoperability, linked open data, user involvement and user-generated content are key issues in these developments. This paper presents a system that gives content providers and users the ability to map, in an effective way, their own metadata schemas to common domain standards and the Europeana (ESE, EDM) data models. The system is currently in wide use by many European research projects and by Europeana. Based on these mappings, semantic query answering techniques are proposed as a means of effective access to digital cultural heritage, providing users with content enrichment and linking of data based on their involvement, and facilitating content search and retrieval. An experimental study is presented, involving content from national content aggregators as well as thematic content aggregators and Europeana, which illustrates the proposed system.
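    The schema-mapping step described in this abstract, aligning a provider's local metadata fields with a common model such as EDM, can be sketched minimally. The provider field names below are hypothetical, and the target keys merely borrow Dublin Core property names for flavor rather than reproducing the actual ESE/EDM element sets.

```python
# Hypothetical provider fields -> common-model target properties.
FIELD_MAP = {
    "titel": "dc:title",
    "urheber": "dc:creator",
    "jahr": "dcterms:issued",
}

def to_common_model(record, field_map):
    """Rewrite a provider record's keys into the shared target schema,
    dropping fields for which no mapping has been declared."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}

provider_record = {"titel": "Stadtplan Basel", "urheber": "N.N.", "inventar": "X-17"}
print(to_common_model(provider_record, FIELD_MAP))
```

    Declaring the mapping once per provider, then applying it mechanically to every record, is what makes centralized aggregation across heterogeneous providers tractable.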
