Search (33 results, page 2 of 2)

  • × theme_ss:"Semantische Interoperabilität"
  • × year_i:[2010 TO 2020}
  1. Panzer, M.: Increasing patient findability of medical research : annotating clinical trials using standard vocabularies (2017) 0.01
    0.009799894 = product of:
      0.019599788 = sum of:
        0.019599788 = product of:
          0.039199576 = sum of:
            0.039199576 = weight(_text_:management in 2783) [ClassicSimilarity], result of:
              0.039199576 = score(doc=2783,freq=2.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.22344214 = fieldWeight in 2783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2783)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
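    These figures are Lucene ClassicSimilarity (TF-IDF) explain output. As a sanity check, the tree above can be recomputed directly from the leaf values shown; a minimal sketch in Python, using ClassicSimilarity's standard idf formula, 1 + ln(maxDocs/(docFreq+1)):

      import math

      # Leaf values from the explain tree for doc 2783 above.
      max_docs, doc_freq, term_freq = 44218, 4130, 2.0
      idf = 1 + math.log(max_docs / (doc_freq + 1))   # 3.3706124
      tf = math.sqrt(term_freq)                        # 1.4142135
      query_norm, field_norm = 0.05204841, 0.046875

      query_weight = idf * query_norm                  # 0.17543502
      field_weight = tf * idf * field_norm             # 0.22344214
      score = query_weight * field_weight * 0.5 * 0.5  # two coord(1/2) factors
      print(round(score, 9))                           # 0.009799894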
    
    Abstract
    Multiple groups at Mayo Clinic organize knowledge with the aid of metadata for a variety of purposes. The ontology group focuses on consumer-oriented health information, using several controlled vocabularies to support and coordinate care providers, consumers, clinical knowledge and, as part of its research management, information on clinical trials. Poor findability, inconsistent indexing and specialized language undermined the goal of increasing trial participation. The ontology group designed a metadata framework addressing disorders and procedures, investigational drugs and clinical departments, adopted and translated the clinical terminology of the SNOMED CT and RxNorm vocabularies into consumer language, and coordinated terminology with Mayo's Consumer Health Vocabulary. The result enables retrieval of clinical trial information from multiple access points, including conditions, procedures, drug names, organizations involved and trial phase. The jump in inquiries since the search site was revised and the vocabularies were modified shows evidence of success.
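    As a toy illustration of the consumer-language translation step described above (the term pairs below are invented, not Mayo's actual vocabulary data):

      # Invented clinical-to-consumer term pairs, in the spirit of translating
      # SNOMED CT / RxNorm terminology into consumer language.
      CONSUMER_LABELS = {
          "myocardial infarction": "heart attack",
          "cholecystectomy": "gallbladder removal surgery",
      }

      def consumer_term(clinical_term: str) -> str:
          """Return the consumer-friendly label, or the original term if unmapped."""
          return CONSUMER_LABELS.get(clinical_term.lower(), clinical_term)

      print(consumer_term("Myocardial infarction"))  # heart attack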
  2. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.01
    0.009239429 = product of:
      0.018478857 = sum of:
        0.018478857 = product of:
          0.036957715 = sum of:
            0.036957715 = weight(_text_:management in 3965) [ClassicSimilarity], result of:
              0.036957715 = score(doc=3965,freq=4.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.21066327 = fieldWeight in 3965, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The paper discusses the importance of initiatives to make standard library metadata models and structures available to the Semantic Web, which release as linked data the very large quantities of rich, professionally generated metadata stored in formats based on these standards, such as UNIMARC and MARC21. It addresses such issues as critical mass for semantic and statistical inferencing; integration with user- and machine-generated metadata; and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies as linked data, including the Dewey Decimal Classification (DDC), ISBD, the Library of Congress Name Authority File (LCNAF), the Library of Congress Subject Headings (LCSH), RAMEAU (French subject headings), the Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF). Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
    Content
    Paper presented in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
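    One way to picture what "releasing a controlled vocabulary as linked data" means in practice: a single heading expressed as a SKOS concept with Python's rdflib. A minimal sketch; the URI and label are invented placeholders, not actual LCSH or VIAF data:

      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import RDF, SKOS

      g = Graph()
      # Placeholder URI and label; a real data set would mint stable URIs
      # under the publishing institution's own namespace.
      concept = URIRef("http://example.org/authorities/subjects/sh0000001")
      g.add((concept, RDF.type, SKOS.Concept))
      g.add((concept, SKOS.prefLabel, Literal("Semantic interoperability", lang="en")))
      print(g.serialize(format="turtle"))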
  3. Leiva-Mederos, A.; Senso, J.A.; Hidalgo-Delgado, Y.; Hipola, P.: Working framework of semantic interoperability for CRIS with heterogeneous data sources (2017) 0.01
    0.009239429 = product of:
      0.018478857 = sum of:
        0.018478857 = product of:
          0.036957715 = sum of:
            0.036957715 = weight(_text_:management in 3706) [ClassicSimilarity], result of:
              0.036957715 = score(doc=3706,freq=4.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.21066327 = fieldWeight in 3706, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3706)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Information from Current Research Information Systems (CRIS) is stored in different formats, on platforms that are not compatible, or even in independent networks. A well-defined methodology for processing management data from a single site would make it possible to link disparate data found across different systems, platforms, sources and/or formats. Based on the functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources.
    Design/methodology/approach: After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set, as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework presented here implements semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and integrate all the databases into a single RDF file; a sketch of the crosswalk idea follows below.
    Findings: The paper also includes an evaluation based on comparing - by means of recall and precision calculations - the proposed model with identical queries made on Open Archives Initiative and SQL, in order to estimate its efficiency. The results were satisfactory, since semantic interoperability facilitates the exact retrieval of information.
    Originality/value: The proposed model enhances management of the syntactic and semantic interoperability of the CRIS system designed. In a real-world setting it achieves very positive results.
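    A minimal sketch of the metadata-crosswalk idea referenced above; the field names for ABCD, Moodle and DSpace are invented placeholders, since the real systems use richer schemas:

      # Hypothetical crosswalk from three source schemas onto common
      # Dublin Core elements; all source field names are placeholders.
      CROSSWALK = {
          "abcd":   {"titulo": "dc:title", "autor": "dc:creator"},
          "moodle": {"fullname": "dc:title", "userid": "dc:creator"},
          "dspace": {"dc.title": "dc:title", "dc.contributor.author": "dc:creator"},
      }

      def to_common(record: dict, source: str) -> dict:
          """Map a source record onto the common element set."""
          mapping = CROSSWALK[source]
          return {mapping[k]: v for k, v in record.items() if k in mapping}

      print(to_common({"titulo": "Informe anual", "autor": "Perez, J."}, "abcd"))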
  4. Dunsire, G.; Nicholson, D.: Signposting the crossroads : terminology Web services and classification-based interoperability (2010) 0.01
    0.008814801 = product of:
      0.017629603 = sum of:
        0.017629603 = product of:
          0.035259206 = sum of:
            0.035259206 = weight(_text_:22 in 4066) [ClassicSimilarity], result of:
              0.035259206 = score(doc=4066,freq=2.0), product of:
                0.18226467 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05204841 = queryNorm
                0.19345059 = fieldWeight in 4066, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4066)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6. 1.2011 19:22:48
  5. Hubrich, J.: Multilinguale Wissensorganisation im Zeitalter der Globalisierung : das Projekt CrissCross (2010) 0.01
    0.008814801 = product of:
      0.017629603 = sum of:
        0.017629603 = product of:
          0.035259206 = sum of:
            0.035259206 = weight(_text_:22 in 4793) [ClassicSimilarity], result of:
              0.035259206 = score(doc=4793,freq=2.0), product of:
                0.18226467 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05204841 = queryNorm
                0.19345059 = fieldWeight in 4793, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4793)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings of the 11th Conference of the German Section of the International Society for Knowledge Organization, Konstanz, 20-22 February 2008. Ed.: J. Sieglerschmidt and H.P. Ohly
  6. Concepts in Context : Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010 (2011) 0.01
    0.008814801 = product of:
      0.017629603 = sum of:
        0.017629603 = product of:
          0.035259206 = sum of:
            0.035259206 = weight(_text_:22 in 628) [ClassicSimilarity], result of:
              0.035259206 = score(doc=628,freq=2.0), product of:
                0.18226467 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05204841 = queryNorm
                0.19345059 = fieldWeight in 628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=628)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2013 11:34:18
  7. Jahns, Y.; Karg, H.: Translingual retrieval : Moving between vocabularies - MACS 2010 (2011) 0.01
    0.008166579 = product of:
      0.016333157 = sum of:
        0.016333157 = product of:
          0.032666314 = sum of:
            0.032666314 = weight(_text_:management in 648) [ClassicSimilarity], result of:
              0.032666314 = score(doc=648,freq=2.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.18620178 = fieldWeight in 648, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=648)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Within the multilingual framework of the CrissCross project, MACS (Multilingual Access to Subjects) has continued its work. MACS has developed a prototype of mappings between three vocabularies: the LCSH (Library of Congress Subject Headings), RAMEAU (Répertoire d'autorité-matière encyclopédique et alphabétique unifié) and the SWD (Schlagwortnormdatei). A database with a Link Management Interface (LMI), which allows easy linking between English, French and German subject headings, was created. The database started with headings from the disciplines of sports and theatre, but headings from all other fields of knowledge have since been included as well. In 2008-2010, the equivalencies between English and French headings produced by the Bibliothèque nationale de France were complemented with the most important German SWD topical terms. Thus, more than 50,000 trilingual links are now available and can be used in different retrieval scenarios. It is planned to use them in The European Library (TEL) in order to support multilingual searches across all European national library collections. The article reports on the project workflow, the mapping methodology and future applications of MACS links.
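    In spirit, the trilingual links work like the lookup below; the headings are invented examples, not actual MACS data:

      # Invented trilingual links in the style of MACS (LCSH / RAMEAU / SWD).
      LINKS = [
          {"lcsh": "Theater", "rameau": "Théâtre", "swd": "Theater"},
          {"lcsh": "Sports", "rameau": "Sports", "swd": "Sport"},
      ]

      def equivalents(term: str, source: str, target: str) -> list:
          """Follow links from a heading in one vocabulary to another."""
          return [link[target] for link in LINKS if link[source] == term]

      print(equivalents("Sport", "swd", "lcsh"))  # ['Sports']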
  8. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.01
    0.008166579 = product of:
      0.016333157 = sum of:
        0.016333157 = product of:
          0.032666314 = sum of:
            0.032666314 = weight(_text_:management in 4532) [ClassicSimilarity], result of:
              0.032666314 = score(doc=4532,freq=2.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.18620178 = fieldWeight in 4532, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4532)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Brazilian Government has made a massive volume of structured, semi-structured and unstructured public data available on the web to ensure that the administration is as transparent as possible. Providing applications with enough capability to handle this "big data environment", so that vital and decisive information is readily accessible, has become a tremendous challenge. In this environment, data processing is done via new approaches from information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, named DBgoldbr, designed to classify government public information with the help of ontologies by transforming open data into linked open data. To achieve this objective, we used "soft system methodology" to identify problems, collect users' needs and design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, and to support the design and management of public policies.
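    A sketch of the open-data-to-linked-open-data step described above; the CSV columns, namespace and property names are invented placeholders rather than the actual DBgoldbr ontology:

      import csv, io
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF

      # Placeholder namespace and a tiny placeholder open-data CSV.
      EX = Namespace("http://example.org/dbgoldbr/")
      raw_csv = "program,city,budget\nPrograma Exemplo,Brasilia,100000\n"

      g = Graph()
      for row in csv.DictReader(io.StringIO(raw_csv)):
          subject = EX[row["program"].replace(" ", "_")]
          g.add((subject, RDF.type, EX.SocialProgram))
          g.add((subject, EX.city, Literal(row["city"])))
          g.add((subject, EX.budget, Literal(int(row["budget"]))))
      print(g.serialize(format="turtle"))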
  9. Balakrishnan, U.; Voß, J.: ¬The Cocoda mapping tool (2015) 0.01
    0.0080845 = product of:
      0.016169 = sum of:
        0.016169 = product of:
          0.032338 = sum of:
            0.032338 = weight(_text_:management in 4205) [ClassicSimilarity], result of:
              0.032338 = score(doc=4205,freq=4.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.18433036 = fieldWeight in 4205, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since the 1990s we have seen an explosion of information, and with it a growing need for aggregation systems that store and manage data and information. However, most information sources apply different Knowledge Organization Systems (KOS) to describe the content of stored data. This heterogeneous mix of KOS across systems complicates access and the seamless sharing of information and knowledge. Concordances, also known as cross-concordances or terminology mappings, map different KOS to each other to improve information retrieval in such a heterogeneous mix of systems (Mayr 2010, Keil 2012). Mappings are also considered a valuable and essential working tool for coherent indexing with different terminologies. However, despite efforts at standardization (e.g. SKOS, ISO 25964-2, Keil 2012, Soergel 2011), there is a significant scarcity of concordances, which has led to an inability to establish uniform exchange formats as well as methods and tools for maintaining mappings and making them easily accessible. This is particularly true in the field of library classification schemes. In essence, there is a lack of infrastructure for the provision and exchange of concordances, their management and quality assessment, as well as tools that would enable semi-automatic generation of mappings. The project "coli-conc" therefore aims to address this gap by creating the necessary infrastructure. This includes the specification of a data format for the exchange of concordances (JSKOS), the specification and implementation of web APIs to query concordance databases (JSKOS-API), and a modular web application providing uniform access to knowledge organization systems, concordances and concordance assessments (Cocoda).
    The focus of the project "coli-conc" lies in the semi-automatic creation of mappings between different KOS in general, and between two important library classification schemes in particular: the Dewey Decimal Classification (DDC) and the Regensburg classification system (RVK). In 2000, the national libraries of Germany, Austria and Switzerland adopted DDC in an endeavor to develop a nation-wide classification scheme. Historically, however, academic libraries in the German-speaking regions have been using their own home-grown systems, the most prominent and popular being the RVK. With the adoption of DDC, building concordances between DDC and RVK has become imperative, although such concordances are still rare. Comprehensive concordances between these two systems have been delayed by major challenges: the sheer size of the two systems (38,000 classes in DDC and ca. 860,000 classes in RVK), the strong disparity in their respective structures, and variation in how concepts are perceived and represented. The challenge is compounded for any manual attempt in this direction. Although there have been efforts on automatic mapping in recent years (OAEI Library Track 2012-2014 and e.g. Pfeffer 2013), such concordances carry the risk of inaccurate mappings, and the approaches are more suitable for mapping suggestions than for automatic generation of concordances (Lauser 2008; Reiner 2010). The project "coli-conc" will facilitate the creation, evaluation and reuse of mappings with a public collection of concordances and a web application for mapping management. The proposed presentation will give an introduction to the tools and standards created and planned in the project "coli-conc", including preliminary work on DDC concordances (Balakrishnan 2013), an overview of the software concept and technical architecture (Voß 2015), and a demonstration of the Cocoda web application.
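    A single concordance entry sketched loosely after the JSKOS mapping format named above; treat the exact field names as an assumption to be checked against the current JSKOS specification, and the DDC-RVK pairing itself as invented:

      import json

      # Hypothetical DDC-to-RVK mapping record in a JSKOS-like shape.
      mapping = {
          "from": {"memberSet": [{"uri": "http://dewey.info/class/025.431/e23/",
                                  "notation": ["025.431"]}]},
          "to": {"memberSet": [{"uri": "http://example.org/rvk/AN_73000",
                                "notation": ["AN 73000"]}]},
          "type": ["http://www.w3.org/2004/02/skos/core#closeMatch"],
      }
      print(json.dumps(mapping, indent=2))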
  10. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.01
    0.0070518414 = product of:
      0.014103683 = sum of:
        0.014103683 = product of:
          0.028207365 = sum of:
            0.028207365 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
              0.028207365 = score(doc=168,freq=2.0), product of:
                0.18226467 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05204841 = queryNorm
                0.15476047 = fieldWeight in 168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=168)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 6.2012 19:08:22
  11. Gracy, K.F.; Zeng, M.L.; Skirvin, L.: Exploring methods to improve access to Music resources by aligning library Data with Linked Data : a report of methodologies and preliminary findings (2013) 0.01
    0.0070518414 = product of:
      0.014103683 = sum of:
        0.014103683 = product of:
          0.028207365 = sum of:
            0.028207365 = weight(_text_:22 in 1096) [ClassicSimilarity], result of:
              0.028207365 = score(doc=1096,freq=2.0), product of:
                0.18226467 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05204841 = queryNorm
                0.15476047 = fieldWeight in 1096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1096)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28.10.2013 17:22:17
  12. Semantic search over the Web (2012) 0.01
    0.006533263 = product of:
      0.013066526 = sum of:
        0.013066526 = product of:
          0.026133051 = sum of:
            0.026133051 = weight(_text_:management in 411) [ClassicSimilarity], result of:
              0.026133051 = score(doc=411,freq=2.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.14896142 = fieldWeight in 411, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=411)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Web has become the world's largest database, with search being the main tool that allows organizations and individuals to exploit its huge amount of information. Search on the Web has traditionally been based on textual and structural similarities, ignoring to a large degree the semantic dimension, i.e., understanding the meaning of the query and of the document content. Combining search and semantics gives birth to the idea of semantic search. Traditional search engines have already advertised some semantic dimensions: some of them, for instance, can enhance their generated result sets with documents that are semantically related to the query terms even though they may not include these terms. Nevertheless, the exploitation of semantic search has not yet reached its full potential. In this book, Roberto De Virgilio, Francesco Guerra and Yannis Velegrakis present an extensive overview of the work done in semantic search and other related areas. They explore different technologies and solutions in depth, making their collection valuable and stimulating reading for both academic and industrial researchers. The book is divided into three parts. The first introduces the reader to the basic notions of the Web of Data. It describes the different kinds of data that exist, their topology, and their storing and indexing techniques. The second part is dedicated to Web search. It presents different types of search, like the exploratory or the path-oriented, alongside methods for their efficient and effective implementation. Other related topics included in this part are the use of uncertainty in query answering, the exploitation of ontologies, and the use of semantics in mashup design and operation. The focus of the third part is on linked data and, more specifically, on applying ideas originating in recommender systems to linked data management, and on techniques for efficient query answering on linked data.
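    In its simplest form, returning "documents that are semantically related to the query terms even though they may not include these terms" reduces to query expansion; a toy sketch, with an invented synonym table standing in for a real ontology or thesaurus:

      # Invented relatedness table; a real system would draw on an
      # ontology, thesaurus or embedding model instead.
      RELATED = {"car": ["automobile", "vehicle"]}

      def expand(query: str) -> set:
          """Add semantically related terms to the original query terms."""
          terms = query.lower().split()
          return set(terms) | {r for t in terms for r in RELATED.get(t, [])}

      print(expand("car insurance"))  # {'car', 'insurance', 'automobile', 'vehicle'}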
  13. Gracy, K.F.: Enriching and enhancing moving images with Linked Data : an exploration in the alignment of metadata models (2018) 0.01
    0.006533263 = product of:
      0.013066526 = sum of:
        0.013066526 = product of:
          0.026133051 = sum of:
            0.026133051 = weight(_text_:management in 4200) [ClassicSimilarity], result of:
              0.026133051 = score(doc=4200,freq=2.0), product of:
                0.17543502 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.05204841 = queryNorm
                0.14896142 = fieldWeight in 4200, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4200)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to examine the current state of Linked Data (LD) in archival moving image description, and to propose ways in which current metadata records can be enriched and enhanced by interlinking such metadata with relevant information found in other data sets.
    Design/methodology/approach: Several possible metadata models for moving image production and archiving are considered, including models from records management, digital curation, and the recent BIBFRAME AV Modeling Study. This research also explores how mappings between archival moving image records and relevant external data sources might be drawn, and what gaps exist between current vocabularies and what is needed to record and make accessible the full lifecycle of archiving through production, use, and reuse.
    Findings: The author notes several major impediments to the implementation of LD for archival moving images. The various pieces of information about creators, places, and events found in moving image records are not easily connected to relevant information in other sources, because they are often not semantically defined within the record and can be hidden in unstructured fields. Libraries, archives, and museums must work on aligning the various vocabularies and schemas of potential value for archival moving image description, to enable interlinking between vocabularies currently in use and those used by external data sets. Alignment of vocabularies is often complicated by mismatches in granularity between vocabularies.
    Research limitations/implications: The focus is on how these models inform functional requirements for access and other archival activities, and how the field might benefit from having a common metadata model for critical archival descriptive activities.
    Practical implications: With a shared model, archivists may more easily align current vocabularies and develop new vocabularies and schemas to address the needs of moving image data creators and scholars.
    Originality/value: Moving image archives, like other cultural institutions with significant heritage holdings, can benefit tremendously from investing in the semantic definition of information found in their information databases. While commercial entities such as search engines and data providers have already embraced the opportunities that semantic search provides for resource discovery, most non-commercial entities are just beginning to do so. This research addresses the benefits and challenges of enriching and enhancing archival moving image records with semantically defined information via LD.

Languages

  • e 27
  • d 6

Types

  • a 20
  • m 8
  • el 6
  • s 5
  • x 1